Digital Law Journal

Can Artificial Intelligence Replace Human Judges?

https://doi.org/10.38044/2686-9136-2024-5-1


Abstract

The essay highlights the positions of those Russian legal scholars, specialists in procedural law, who believe that modern technologies should change the very essence of legal proceedings — in particular, that sooner or later the human judge will be replaced by artificial intelligence and that digital legal proceedings will follow new principles. For his part, the author of the essay puts forward arguments justifying the value of the classical principles of legal proceedings, primarily those listed in Art. 10 of the Universal Declaration of Human Rights of 1948, since only their operation creates a regime in which justice is administered and the highest quality of protection of subjective rights, freedoms, and legitimate interests is ensured. Special focus is placed on the thesis research of Danil Olegovich Drozd, “The Procedural Forms of the Use of Artificial Intelligence Elements in the Modern Commercial and Civil Litigation” (Moscow, 2024), whose author proposed options for adapting the established principles of the judicial process to artificial intelligence. In response, arguments are presented that the key principles of justice cannot be adapted to artificial intelligence either at present or in the near future. Particular emphasis is placed on the principle of judicial independence and on the principle of the administration of justice exclusively by the court. Consequently, it is clearly too early to speak of replacing the human judge with such intelligence in the administration of justice. The author believes that modern technologies can certainly facilitate the administration of justice, including by improving access to it. At the same time, the key rule governing the use of modern technologies in legal proceedings should be that such technologies must fully ensure the operation of the generally recognized principles of justice.

In Place of an Introduction

Modern technologies are increasingly integrated into all spheres of human activity, and legal relations are no exception. Today, such relations are often created, modified, and terminated with the help of information technologies. Moreover, digital tools are actively employed to protect the subjective rights, freedoms, and legitimate interests of individuals and legal entities. While digitalization initially affected primarily the format of legal protection procedures (Gilles, 2011, pp. 41–54), it now clearly influences their very substance.

Some legal scholars view the spread of IT in the legal domain with great enthusiasm. Others, while recognizing its potential benefits for both society and the individual, caution that the universal integration of artificial intelligence into legal practice may lead to stagnation in legal development. They argue that excessive reliance on AI could weaken analytical thinking and obscure hidden threats within electronic justice systems (Kovler, 2022, pp. 5–29).

Today, the use of modern technologies in the protection of rights is, quite simply, a global trend. In many countries, IT is being used to improve the efficiency of legal professionals and to help individuals address legal issues without direct legal assistance. Online Dispute Resolution (ODR) systems have emerged, aimed at reducing the need for court intervention and alleviating pressure on judicial systems. These platforms are modeled after traditional alternative dispute resolution methods such as arbitration, negotiation, and mediation. Within them, AI may serve either as an assistant to human arbitrators or as a full replacement.

However, critics point to significant limitations. While artificial intelligence may be effective in routine matters, it struggles with complex disputes. Algorithm-based decision-making lacks empathy, contextual sensitivity, and ethical judgment — qualities that are essential for administering fair and humane justice (Nhemi, 2023).

Overview of the Positions of Supporters of Introducing AI into Legal Proceedings

A prominent procedural scholar and retired judge from the Ural school, I. Reshetnikova, asserts that artificial intelligence will inevitably transform legal proceedings. She argues that AI could be entrusted with predicting the likely outcome of a case, thereby assisting disputing parties in deciding whether to pursue litigation or reach a settlement. She also suggests that AI might be permitted to render interim decisions, with parties retaining the right to challenge them — after which the case would proceed in a traditional adversarial format with full procedural safeguards (Reshetnikova, 2024, pp. 30–41).

A. Neznamov, who shares the same academic tradition, contends that AI in the justice system is no longer the realm of science fiction but a realistic near- or medium-term prospect. Consequently, he argues that the focus should shift from debating whether AI should be used in judicial activity to examining the rules, principles, and challenges surrounding its implementation (Neznamov, 2024, pp. 90–106).

Neznamov proposes two models for the application of AI in Russian civil proceedings:

  • Surrogate Justice — in which artificial intelligence independently analyzes case facts and renders decisions under human procedural and technical supervision. He likens this model to summary proceedings, which already involve limited adversarial elements and follow a largely formalized structure.
  • Augmented Justice — where AI supports judicial work by providing analysis and draft solutions, while the final decision remains with a human judge. Neznamov considers this model particularly appropriate at the pre-trial stage (Neznamov, 2024, p. 95).

Neznamov also emphasizes that artificial intelligence could enhance consistency in judicial practice (Neznamov, 2024, p. 99).

Many other scholars likewise support the admissibility of AI in summary proceedings, regardless of whether decisions are rendered by a human or a machine, given that such formats lack fully adversarial hearings and typically involve largely uncontested claims.

Some scholars take this idea further. For example, V. Laptev proposes assigning all first-instance cases to AI, with appellate review reserved for human judges. He notes that if both levels were managed by AI, appellate findings would likely always align with those of the lower instance (Laptev, 2024, pp. 44–51). Notably, Laptev does not address the foundational principles on which such AI-based justice would be built.

Other researchers adopt a more cautious approach. Kurochkin, for instance, argues that only AI systems that meet rigorous legal standards and are developed in full compliance with civil procedural principles should be permitted in judicial practice (Kurochkin, 2024, pp. 42–74). Nonetheless, the practical mechanisms for such integration often remain undefined.

A different group of scholars views artificial intelligence as a catalyst for transforming the very principles of legal proceedings. Once again, Reshetnikova notes that a significant percentage (40–60%) of civil cases are now resolved without oral hearings — through summary, simplified, or default procedures. She concludes that the classical principle of orality in adversarial proceedings is gradually giving way to written and digital formats (Reshetnikova, 2024, p. 38).

Some scholars occasionally propose new principles that, in their view, should underpin digital legal proceedings. For example, Samsonova (2022, pp. 77–128) highlights the importance of ensuring access to information about the progress and status of individual cases. Mironova (2021, p. 15) emphasizes the principle of informatization as a foundational element of modern justice.

A more nuanced position is offered by E. G. Streltsova, who acknowledges that digitalization may influence the transformation of existing procedural principles but does not, in itself, create entirely new ones. However, with regard to algorithmic decision-making, she argues that it must be governed by newly formulated fundamental principles — such as the principle of fair use of digital technologies. This principle would serve as a legal framework for determining whether the outcomes of interactions with digital systems should be upheld or annulled (Streltsova, 2024, pp. 75–89).

The discussion extends beyond civil procedure. Scholars in the fields of criminal procedure and forensic science also observe that digital technologies are reshaping the judicial process. For example, L. V. Bertovsky suggests that the introduction of AI will give rise to a new, hybrid theory of evidence. Under this model, evidentiary material obtained during investigations must be formalized — that is, converted into a machine-readable format. A human judge would then issue a decision based on personal conviction, informed by AI-generated recommendations and a ‘human-readable’ explanation of how those recommendations were formed (Bertovsky, 2022, pp. 8–12).
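
To make the shape of this proposal concrete, here is a minimal, purely illustrative sketch of such a pipeline. It is our own schematic reading, not anything drawn from Bertovsky's work, and every type, field, and value in it is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class FormalizedEvidence:
    """A piece of evidence converted into a machine-readable record."""
    source: str    # e.g., "witness statement", "expert report"
    content: dict  # structured representation of the material

@dataclass
class AIRecommendation:
    """Advisory output generated from the formalized case file."""
    suggested_finding: str
    rationale: str  # the 'human-readable' explanation of how it was formed

def render_decision(evidence: list[FormalizedEvidence],
                    advice: AIRecommendation,
                    judges_conviction: str) -> dict:
    # The AI output is advisory only: the operative ruling records the
    # human judge's own conviction alongside the advice that was considered.
    return {
        "items_considered": len(evidence),
        "ai_advice": advice.suggested_finding,
        "ai_rationale": advice.rationale,
        "ruling": judges_conviction,
    }

# Hypothetical usage: the judge weighs the AI's rationale but rules independently.
decision = render_decision(
    [FormalizedEvidence("expert report", {"signature": "genuine"})],
    AIRecommendation("grant the claim", "facts match a settled line of case law"),
    judges_conviction="grant the claim in part",
)
print(decision["ruling"])
```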

As noted earlier, numerous recent dissertations examine the legal implications of digitalization. Of particular interest is a 2024 thesis by Danil Olegovich Drozd, the key arguments of which are summarized below.

Drozd offers an illustrative analogy: the goals and objectives of legal proceedings represent the destination; the principles of justice are the rails that guide the process; and AI systems are the railway cars. These cars can only run on rails with which they are compatible. In other words, AI systems that do not align with the foundational principles of justice cannot, as a general rule, be used in legal proceedings.

Nonetheless, Drozd argues that it may be permissible to modify or depart from existing principles if doing so facilitates the achievement of the primary goals of legal proceedings. He contends that the objectives of legal proceedings take precedence over procedural principles, which serve merely as instruments to realize those objectives (Drozd, 2024, p. 83).

In his further analysis, Drozd examines the relationship between artificial intelligence and judicial discretion, as well as fundamental legal principles such as the administration of justice solely by the court and judicial independence (Drozd, 2024, pp. 97–175). He defines judicial discretion as the authority of a judge to resolve contested legal issues that are not explicitly regulated by statute, based on the overarching principles of the current legal system (Drozd, 2024, p. 106). Such discretion, he argues, must be firmly grounded in codified legal principles and must be clearly articulated in the court’s reasoning (Drozd, 2024, p. 111).

Because these principles are inherently abstract, modern AI systems — and even those anticipated in the near future — are incapable of resolving cases that require the exercise of genuine judicial discretion. However, AI can assist in determining whether a legal gap truly exists.

Accordingly, full adjudication by AI may be appropriate only when two conditions are met:

  1. The case is factually typical, such that no judicial discretion is required at the procedural stage;
  2. There exists a clear and consistent judicial precedent in the relevant area of substantive law (Drozd, 2024, pp. 111–112, 115).

Regarding the principle of the administration of justice solely by the court, D. Drozd observes: “It may seem that if AI performs part of the judicial activity instead of a human, the said principle will automatically be violated” (Drozd, 2024, pp. 93–94).

However, he argues that the use of AI in judicial decision-making does not inherently violate this constitutional principle, provided that AI is integrated as a component of the judicial system and operates within the court as part of a unified procedural mechanism.

According to Drozd, the key issue lies in how AI is used:

  • If AI functions as a technical assistant to the judge — similar to court clerks or judicial aides — its role does not require substantial legislative revision, as the work of court personnel is currently subject to minimal regulation.
  • If AI is to play a more active role in the administration of justice, legislative amendments may be necessary. These would include the formal recognition of AI as an entity involved in delivering justice and its explicit inclusion among the participants in judicial decision-making (Drozd, 2024, p. 95).

Drozd also addresses the principle of judicial independence, which he defines as a condition in which: “courts and individual judges have the opportunity to act and make decisions freely from any external pressure, being confident in their protection from punishment of any kind” (Drozd, 2024, p. 126).

This definition implies that no judge — nor any external party, including actors outside the judiciary — should be able to influence decisions in specific cases or shape broader judicial practice.

Drozd offers a set of normative and institutional criteria for assessing whether judicial independence is maintained:

  1. Judicial independence and the structure of the court system must be protected by entrenched law, which is not easily subject to amendment.
  2. Judicial appointments must be free from monopoly and, ideally, made by a collective body involving various legal communities or branches of government.
  3. Judges should be appointed for life or until a fixed retirement age, with clearly defined rules for any potential revision.
  4. The removal or suspension of judges must follow complex procedures and be based on a closed list of grounds, excluding discretionary decisions by powerful individuals.
  5. Judicial budgets must be protected from political interference, and judicial salaries must be indexed to inflation and comparable to those in other legal professions.
  6. The assignment of cases to judges should be automatic or strictly regulated to prevent manipulation (see the illustrative sketch after this list).
  7. Judicial proceedings must be fully transparent, including the publication of both majority and dissenting opinions. The reasoning provided in judgments plays a central role in ensuring accountability (Drozd, 2024, pp. 128–130).
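
As a purely illustrative aside, the sixth criterion is the one most naturally expressed in software. The sketch below is ours, not Drozd's, and shows one way automatic, manipulation-resistant case assignment might work; the function, judge names, and log format are hypothetical placeholders.

```python
import secrets
from datetime import datetime, timezone

def assign_case(case_id: str, eligible_judges: list[str], audit_log: list) -> str:
    # secrets.choice draws from the OS-level CSPRNG, so court staff cannot
    # predict or reproduce the draw in order to steer a case to a chosen judge.
    judge = secrets.choice(eligible_judges)
    # Every assignment is appended to a log that can later be inspected.
    audit_log.append({
        "case": case_id,
        "judge": judge,
        "assigned_at": datetime.now(timezone.utc).isoformat(),
    })
    return judge

log: list[dict] = []
print(assign_case("case-2025-0001", ["Judge A", "Judge B", "Judge C"], log))
```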

In this context, the use of AI can present a risk to judicial independence, depending on how the system is built and trained.

Drozd outlines three possible training models for judicial AI:

  1. AI systems are pre-trained by developers to resolve legal problems according to predetermined rules.
  2. AI learns autonomously from legal practice by analyzing existing court decisions and doctrines.
  3. A hybrid model: AI is initially trained by developers and then continues to learn from case law and doctrinal sources (Drozd, 2024, p. 117).

The degree to which AI reflects or deviates from judicial independence will depend on the transparency, quality of sources, and regulatory oversight involved in its training and use.

D. O. Drozd acknowledges that in all three AI training models — whether developer-driven, practice-based, or hybrid — AI inevitably reflects external views, “whether these are the opinions of the developers of the system itself or the opinions of the authors of academic works that make up the legal doctrine” (Drozd, 2024, p. 118).

However, in his view, this does not preclude the possibility of entrusting AI with judicial functions, nor does it inherently violate the principle of judicial independence. To reconcile these concepts, Drozd proposes adapting the traditional criteria of judicial independence for use in evaluating AI-based judicial systems.

For example, the second criterion, which concerns the procedure for judicial appointment, requires substantial reform in the context of AI. Rather than a traditional appointment process, the design and development of AI systems for the judiciary should be entrusted to a plurality of actors with divergent or even conflicting interests. This multipolar governance structure is essential for preventing monopolization and ensuring that no single entity controls the value framework embedded in AI’s functioning.

The possible composition of such a group of stakeholders can be inferred from the recommendations of the Council of Europe’s Commissioner for Human Rights on the use of artificial intelligence in the public sector. These recommendations emphasize inclusivity, transparency, and accountability in AI development, particularly when fundamental rights — such as access to justice and fair trial guarantees — are at stake.1

Regarding the final criterion of judicial independence, D. O. Drozd emphasizes that it will be satisfied only if the fact and scope of AI use in judicial proceedings are explicitly disclosed to all parties. This disclosure must occur both prior to the deployment of the AI system and be reflected in the final judicial decision (Drozd, 2024, p. 136). Furthermore, the reasoning behind AI-generated decisions should be made accessible to the extent technically possible. According to Drozd, this transparency can be achieved either through the system’s outputs or, where appropriate, through access to its source code.

Another crucial element of AI independence, in Drozd’s view, is the protection of such systems from covert interference. This includes unauthorized influence by technical staff, court employees, judges themselves, or any other actors with system access. Moreover, the decision-making logic must be safeguarded against unilateral modifications that could compromise the system’s neutrality (Drozd, 2024, pp. 136–140).

Let us offer some reflections on the matter under discussion.

It is evident from the positions analyzed above that the authors agree on one key point: the influence of information technologies is bound to bring about fundamental changes in the very nature of legal proceedings. However, the extent of such changes envisioned by each author varies considerably.

Our analysis suggests that researchers who recognize the existence of justice not only in classical claim-based procedures but also in simplified proceedings — such as order-based processes — appear more willing to discard traditional procedural principles. Their reasoning seems to rest on the observation that summary proceedings operate without reference to core principles of civil procedure, and yet justice is still presumed to be administered.

If this premise is accepted, the logical conclusion would be that the application of such principles is not a necessary condition for the administration of justice. Following this line of reasoning, one could be led to justify the replacement of human judges by an AI system on the basis that the system’s rulings could still be considered lawful adjudication, regardless of their adherence to established procedural norms.

At the same time, some authors who acknowledge the significance of traditional principles nonetheless advocate for their partial revision — or even complete replacement — with new principles adapted to the digital context.

Let us now consider a different perspective.

We proceed from the assumption that true justice can only be administered through a procedure grounded in a specific system of guarantees. These guarantees are not arbitrary. They are designed to ensure the reliable establishment of facts, to protect against arbitrariness — whether by the court or external actors — and to safeguard the rights and interests of all legal subjects more effectively than any alternative dispute resolution mechanism.

The content and structure of this system of guarantees are not incidental. They have developed gradually over time through legal evolution, doctrinal refinement, and even social and political struggles. These guarantees have crystallized into what we now recognize as the principles of legal procedure, and their significance cannot be lightly dismissed — even in the face of digital innovation.

These guarantees primarily encompass the principles of legal proceedings, particularly those enshrined in Article 10 of the 1948 Universal Declaration of Human Rights,2 and now reflected in the constitutions of many countries. In this regard, we believe that the objectives of legal proceedings — above all, the protection of individual rights, freedoms, and legitimate interests through the administration of justice — can be achieved only when these principles are actively upheld and applied.

This does not imply that the protection of legal interests is unattainable by other means. Rather, it means that such protection, when exercised outside the framework of judicial procedure, does not constitute the administration of justice, and therefore lacks the legal and institutional consequences that derive from it. Accordingly, when non-judicial (i.e., extrajudicial) remedies prove inadequate, affected parties must retain the right to seek judicial protection. This protection is implemented within a specific system of procedural guarantees and represents the highest standard of legal redress.

It may even be argued that the public demand for this form of protection has not only established it as the central aim of legal proceedings but has also driven the development of the very system of principles designed to ensure its realization.3

In our view, a change in the principles of legal proceedings is permissible in only two cases:

(a) when the goals of legal proceedings are revised — particularly the primary goal outlined above — since new objectives will likely require corresponding means of implementation (i.e., new principles);

(b) when new principles are developed and substantiated that are more effective than existing ones in ensuring that the currently recognized goals of legal proceedings are reached.

Nevertheless, we believe that, to date, no truly convincing arguments have been presented in academic literature to support the emergence of new goals of legal proceedings that would replace the existing ones. Moreover, it is the pursuit of the current primary goal — the protection of subjective rights, freedoms, and legitimate interests — that produces a multifaceted effect, which is beneficial to society, individuals, and the state.4 For this reason, it is unlikely that this goal should or could be replaced, either now or in the future.

As for the so-called new principles of legal proceedings, we find the very rationale for their introduction highly questionable. For example, the frequently cited principle of the availability of information about legal proceedings in a specific case is already fully encompassed by established principles — namely, the principle of adversarial proceedings (insofar as it ensures access to case materials for the parties involved) and the principle of publicity (insofar as it guarantees transparency for the general public).

The same applies to the proposed principle of the fair use of digital technologies. It is unclear which specific goal of legal proceedings this principle is intended to support. If its purpose is to prevent violations of procedural rights through the use of digital tools, such protection is already afforded by the classical principle of legality, which in the procedural context stipulates that all participants in legal proceedings may act only within the bounds explicitly permitted by law.

If, on the other hand, the principle of fair use is intended as a refined or adjusted version of what is sometimes referred to in the literature as procedural fairness, then such a principle appears not only redundant, but potentially harmful. Procedural activity, by its nature, is aimed at protecting the subjective rights, freedoms, and legitimate interests of parties holding opposing legal positions. Accordingly, the exercise of any procedural right — even by a party acting in bad faith — still constitutes an exercise of the right to judicial protection. Restricting that right on the basis of subjective assessments of bad faith risks undermining the foundational principle of universal access to justice.

Moreover, unlike material civil relations, procedural relations always involve the court as an active participant, which ensures that the actions of other participants are subject to judicial oversight. It is this structural safeguard — rather than the adoption of a principle of good faith — that effectively prevents the abuse of procedural rights.

It could be argued that the principle of fair use of digital technologies is intended for a new kind of judicial process — one involving artificial intelligence or even procedural interactions without the participation of human judges. Indeed, some authors suggest the possibility of such models. To this, we respond: a serious academic debate on such forms of proceedings is only viable if, beyond hypothetical assertions, proponents can present persuasive evidence that these innovations would yield greater benefits for society, the state, and individuals than the current model of justice, which operates under universally recognized principles and with the mandatory involvement of a human judge.

Similarly, the so-called principle of information content cannot be regarded as a legal principle in the proper sense. Rather than serving as a normative foundation, it merely describes a feature of the legal process.

As noted earlier, some scholars, such as D. O. Drozd, have sought to adapt the existing principles of legal proceedings to accommodate digital tools rather than abandon them altogether. This cautious and incremental approach is commendable. Nevertheless, in our view, Drozd’s proposals do not resolve the core issue: the fundamental incompatibility between the use of artificial intelligence and the foundational principles that make the administration of justice possible.

One of Drozd’s key assumptions is particularly problematic — namely, the notion that AI can be entrusted with cases that do not require judicial discretion. This position is difficult to support. The need for discretion may arise even in cases that appear routine on the surface, and recognizing that need requires a preliminary substantive analysis. If AI is assigned to a case before such analysis occurs, how can it reliably determine whether judicial discretion is required?

Therefore, any attempt to divide cases into those suitable for AI and those requiring a human judge carries the risk of misclassification, with AI inevitably handling cases in which discretion is, in fact, essential.

Further, Drozd’s proposal to use AI to identify gaps in the law is also questionable — particularly as a general or universal approach. While AI may be capable of flagging situations in which a legal norm appears to be absent, it cannot determine whether the gap truly reflects a legislative omission or merely an area that is not — and should not be — subject to regulation. Making this distinction requires the application of doctrinal categories such as the subject matter of a legal branch, the concept of legal regulation, and methods of legal regulation. These categories are more abstract and complex than even legal principles themselves, which Drozd acknowledges are too vague for AI to process. If AI cannot reliably apply legal principles, it is even less likely to comprehend and navigate the abstract logic that defines the boundaries of the legal system.

It is important to emphasize that courts must apply both substantive and procedural legal principles in every case. These principles are often essential for the correct interpretation of specific legal norms. Since legal principles are themselves norms of law, courts not only have the right to apply them — they are obligated to do so. This requirement further challenges the suitability of AI for adjudication.5

Even setting aside the aforementioned concerns, the AI model proposed by Drozd applies law in a narrowly positivist sense. Such a limited framework is unlikely to provide full protection to all parties involved. Today, the law must be approached in an integrative way, combining elements of legal positivism with natural law and sociological jurisprudence. Only through such a comprehensive approach can courts invalidate unlawful legislation, refuse to apply unjust norms, or interpret laws contra legem in order to protect violated rights.

Consider, too, Drozd’s approach to adapting legal principles for AI. At first glance, the modifications he proposes seem to preserve the essence of those principles while enabling AI to administer justice. In reality, this is not the case. Drozd defines compliance with legal principles using criteria that fail to fully reflect their true function. For example, his criteria for judicial independence overlook a key dimension: the judge’s capacity to oppose other branches of power. AI is unlikely ever to fulfill this role.

Judicial independence is grounded in distinctly human qualities — life experience, wisdom, empathy, and deep legal consciousness — not merely the ability to apply memorized rules. Critics may argue that not every judge possesses these traits. That may be true, but history shows that judicial weakness more often stems from external pressure than from the absence of capable judges. In many cases, it is precisely those in power who prevent such judges from being appointed.

It should also be noted that legal certainty — often cited as a benefit of AI due to its ability to ensure uniformity in judicial practice — is indeed important, but it is only truly valuable when it serves to uphold the living law. When uniformity leads to the repeated enforcement of flawed legal interpretations across all judicial instances, AI, unlike a human judge, cannot break this vicious cycle of systemic error. On the contrary, it will simply entrench and perpetuate it.

In this regard, the implementation of Drozd’s proposals will not, in fact, lead to a meaningful integration of the principle of judicial independence with AI. At best, it may allow for some degree of autonomous functioning — but this autonomy cannot begin to approach the significance or role of judicial independence within the system of justice.

D. Drozd’s proposal to adapt the principle that only courts administer justice to include AI is, in our view, unacceptable. This principle holds that only a court may render final decisions concerning legal rights and interests. The Constitution of the Russian Federation links this function to a specific system of legal principles. Any body — even one labeled a ‘court’ and composed of artificial intelligence — does not truly administer justice if it operates outside that system of principles.

Replacing human judges with AI is highly likely to violate one of the foundational principles of legal proceedings: judicial independence. In such a scenario, justice would not be properly administered by the so-called ‘court’.

As noted in a previous article, even if AI eventually acquires many human-like traits, it should not be entrusted with the administration of justice. Human knowledge develops through lived experience, social interaction, and emotional understanding. This personal dimension shapes a judge’s comprehension of people and society. AI — no matter how sophisticated — remains fundamentally different. Not every person can fully understand another’s interests, but only a human is capable of approaching a true understanding of another person.

Moreover, if AI were ever to fully replicate human intelligence through accumulated social experience, the distinction between artificial and natural intelligence would begin to blur. Treating different intelligences differently based solely on their origin could raise serious ethical and legal concerns, including the risk of discrimination (Tumanov, 2024a, pp. 20–21).

We acknowledge that human judges are far from perfect. In many countries — especially those lacking strong democratic institutions — judges may, in certain cases, fail to maintain impartiality. However, this occurs not because the necessary safeguards for justice do not exist, but because they are not effectively implemented. Replacing human judges with AI would not solve this problem, as the root cause remains the same: governments and other powerful actors often lack the political will to support a strong, independent judiciary. Furthermore, as previously discussed, AI lacks the capacity to exhibit true independence in the way a human judge can.

In conclusion, we wish to emphasize one additional concern. The notion that AI might make a preliminary decision while a human judge renders the final ruling is problematic. Such an arrangement risks encouraging judges to rely uncritically on AI-generated conclusions without fully examining the evidence or the legal context. Decisions made in this manner cannot be regarded as the proper administration of justice.

Modern technologies require critical and cautious evaluation. Idealizing them risks judicial error. For example, many continue to believe that blockchain data cannot be falsified.6 Based on this assumption, some propose that blockchain-based evidence should be accepted in court without the need to check for unauthorized alterations. In fact, legislation could even be adopted to prohibit such verification.

However, experience has shown that blockchain technology is not as secure as initially assumed.7 If courts were to rely on its presumed infallibility to justify limiting verification, and hackers eventually found ways to manipulate blockchain data, this could result in judicial errors and wrongful decisions based on falsified evidence.

This demonstrates that technologies once believed to be flawless may later prove vulnerable. Idealizing such tools and assigning them special legal status may hinder the pursuit of justice rather than support it.

By contrast, many classical procedural rules remain robust and effective. For example, current law requires courts to evaluate evidence based on their personal convictions, reached through thorough, objective, and direct examination. No piece of evidence carries predetermined weight. Courts are obligated to assess the reliability of all evidence, even when it appears unaltered.

This approach makes it possible to call any piece of evidence into doubt and supports the principle of free judicial evaluation. It plays a critical role in uncovering the truth and protecting the rights and interests of the parties involved.

Many scholars have raised concerns about the potential for errors in artificial intelligence and the difficulty of detecting them, given the opaque nature of algorithmic processes. Yuval Noah Harari poses a critical question in this regard, accompanying it with a vivid example of a court case: “How would humans be able to identify and correct such mistakes? And how would flesh-and-blood Supreme Court justices be able to decide on the constitutionality of algorithmic decisions? Would they be able to understand how the algorithms reach their conclusions?

These are no longer purely theoretical questions. In February 2013, a drive-by shooting occurred in the town of La Crosse, Wisconsin. Police officers later spotted the car involved in the shooting and arrested the driver, Eric Loomis. Loomis denied participating in the shooting, but pleaded guilty to two less severe charges: ‘attempting to flee a traffic officer,’ and ‘operating a motor vehicle without the owner’s consent.’ When the judge came to determine the sentence, he consulted with an algorithm called COMPAS, which Wisconsin and several other U.S. states were using in 2013 to evaluate the risk of reoffending. The algorithm evaluated Loomis as a high-risk individual, likely to commit more crimes in the future. This algorithmic assessment influenced the judge to sentence Loomis to six years in prison — a harsh punishment for the relatively minor offenses he admitted to.

Loomis appealed to the Wisconsin Supreme Court, arguing that the judge violated his right to due process. Neither the judge nor Loomis understood how the COMPAS algorithm made its evaluation, and when Loomis asked to get a full explanation, the request was denied. The COMPAS algorithm was the private property of the Northpointe company, and the company argued that the algorithm’s methodology was a trade secret. Yet without knowing how the algorithm made its decisions, how could Loomis or the judge be sure that it was a reliable tool, free from bias and error? A number of studies have since shown that the COMPAS algorithm might indeed have harbored several problematic biases, probably picked up from the data on which it had been trained.

In Loomis v. Wisconsin (2016) the Wisconsin Supreme Court nevertheless ruled against Loomis. The judges argued that using algorithmic risk assessment is legitimate even when the algorithm’s methodology is not disclosed either to the court or to the defendant. Justice Ann Walsh Bradley wrote that since COMPAS made its assessment based on data that was either publicly available or provided by the defendant himself, Loomis could have denied or explained all the data the algorithm used. This opinion ignored the fact that accurate data may well be wrongly interpreted and that it was impossible for Loomis to deny or explain all the publicly available data on him.

The Wisconsin Supreme Court was not completely unaware of the danger inherent in relying on opaque algorithms. Therefore, while permitting the practice, it ruled that whenever judges receive algorithmic risk assessments, these must include written warning for the judges about the algorithms’ potential biases. The court further advised judges to be cautious when relying on such algorithms. Unfortunately, this caveat was an empty gesture. The court did not provide any concrete instruction for judges on how they should exercise such caution. In its discussion of the case, the Harvard Law Review concluded that ‘most judges are unlikely to understand algorithmic risk assessments.’ It then cited one of the Wisconsin Supreme Court justices, who noted that despite getting lengthy explanations about the algorithm, they themselves still had difficulty understanding it.

Loomis appealed to the U.S. Supreme Court. However, on June 26, 2017, the court declined to hear the case, effectively endorsing the ruling of the Wisconsin Supreme Court. Now consider that the algorithm that evaluated Loomis as a high-risk individual in 2013 was an early prototype. Since then, far more sophisticated and complex risk-assessment algorithms have been developed and have been handed more expansive purviews. By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend. And prison sentences are just the tip of the iceberg.” (Harari, 2024, chapter 9).

In Place of a Conclusion

We believe that, both now and in the foreseeable future, the replacement of human judges with artificial intelligence remains impermissible. Beyond the judicial sphere, numerous alternative mechanisms exist for protecting rights, freedoms, and legitimate interests — mechanisms in which AI may well serve as the primary decision-making tool. However, the administration of justice is inconceivable without foundational principles that guarantee its legitimacy — some of which are inherently incompatible with the nature and functioning of AI. Accordingly, individuals seeking protection must, at a minimum, retain the right to appeal to a court for adjudication.

The occasional failure to apply certain judicial principles in practice does not reflect their obsolescence or diminished social relevance. Rather, it points to either: (1) a lack of understanding by authorities of the essence and societal function of justice, or (2) a deliberate disinterest in fostering an independent judiciary capable of protecting all subjects of law — including against abuses committed by other branches of power. The instruments necessary for delivering substantive justice have long existed — what remains essential is their consistent and principled application.

Innovation must not come at the cost of abandoning civilizational achievements of enduring value, nor should it promote the substitution of established legal safeguards with technologies whose social utility remains uncertain.

We fully acknowledge the benefits that information technology offers to legal proceedings — particularly in enhancing access to justice — but we consider it unacceptable to replace judicial guarantees with technological solutions. In this regard, we propose that the guiding principle for the use of IT in judicial processes — what might be called the principle of digitalization of proceedings — should be the following: its application must serve exclusively to support and uphold the core principles that are indispensable to the proper administration of justice.

1. Commissioner for Human Rights. (2019). Unboxing artificial intelligence: 10 steps to protect human rights. Council of Europe. https://rm.coe.int/unboxing-artificial-intelligence-10-steps-to-protect-human-rights-reco/1680946e64

2. GA Res. 217 (III) A, Universal Declaration of Human Rights, art. 10 (Dec. 10, 1948).

3. It should also be noted that only a small number of authors reflecting on the topic of new principles also touch upon the issue of the goals of legal proceedings, although the indicated problems are inseparably linked.

4. For a detailed justification of this, see: Tumanov (2024b, pp. 153–172).

5. An exception, in our view, is the issuance of court orders, which AI can handle. Writ proceedings are not true justice administration, as many legal principles do not apply to them (see Tumanov, 2024b, pp. 150–153). Court orders rely on documents that clearly confirm the debt. Such orders can be canceled through a simplified process and do not prevent filing a claim, since, unlike court decisions, they lack exclusive legal force. Additionally, AI could verify debtor signatures on documents by comparing them with samples in databases (e.g., the State Services system), reducing fraud risks.

6. Individual authors sometimes rely on this thesis to ground conclusions about the benefits of using blockchain in jurisdictional activities. See, for example: Mukhtarova (2024, pp. 75–79).

7. Experts explained the supposed unhackability of blockchains as follows. The Bitcoin blockchain, for example, is controlled by many computers distributed around the world. Its open nature allows anyone with the underlying infrastructure to become a miner or run a validator node. If one miner is compromised, thousands of other miners will prevent the hacker from changing the data on the blockchain. For an attacker to hack a public blockchain like Bitcoin, they must gain control of at least 51% of the network’s computing power. This is because the Bitcoin blockchain follows the longest chain rule: the longest version of the chain is considered correct. A hacker would have to accumulate enough computing power to overtake this chain, which will typically have the most computing power behind it and grow the fastest. Hacking a blockchain maintained by a large and diverse group of miners is therefore extremely difficult and unprofitable, which is the main reason why a popular blockchain like Bitcoin was thought to be unhackable. However, the notion that no blockchain at all can be hacked has been disproved. In 2019 and 2020, hackers attacked the proof-of-work (PoW) chain Ethereum Classic (ETC) by gaining control of 51% of its computing power and managed to “double” their ETC tokens by creating illegitimate transactions. In addition, quantum computing is noted to pose a threat to blockchains; blockchain communities are actively researching and developing quantum-resistant algorithms to mitigate this risk. See: Lepcha, M. (2024, January 2). Why can’t blockchain be hacked? Techopedia. https://www.techopedia.com/experts/why-cant-blockchains-be-hacked
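
For readers unfamiliar with the mechanism, the following minimal sketch illustrates the longest chain rule described above. It is our own simplification, not taken from the cited source: cumulative proof-of-work is approximated as one unit of work per block, and all labels are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Chain:
    blocks: list = field(default_factory=list)  # ordered block labels

    def work(self) -> int:
        # Simplification: every block embodies one equal unit of work.
        return len(self.blocks)

def select_canonical(chains: list) -> Chain:
    # Nodes independently adopt the fork with the most accumulated work.
    return max(chains, key=lambda c: c.work())

# Honest history: six blocks built by the distributed majority of miners.
honest = Chain([f"H{i}" for i in range(6)])

# An attacker secretly forks from block H2. With a minority of the network's
# computing power, the fork stays shorter and every node ignores it...
weak_fork = Chain(["H0", "H1", "H2", "A3", "A4"])
assert select_canonical([honest, weak_fork]) is honest

# ...but with 51% of the computing power the attacker eventually outpaces
# the honest miners, and nodes switch to the rewritten history; this is the
# mechanism behind the Ethereum Classic incidents mentioned above.
strong_fork = Chain(["H0", "H1", "H2", "A3", "A4", "A5", "A6"])
assert select_canonical([honest, strong_fork]) is strong_fork
print("canonical tip:", select_canonical([honest, strong_fork]).blocks[-1])
```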

References

1. Bertovsky, L. V. (2022). Teorii otsenki dokazatel’stv: Nazad v budushcheye [Theories of evidence evaluation: Back to the future]. In L. V. Bertovsky, S. M. Kurbatova, E. A. Erakhtina, & A. G. Rusakov (Eds.), Aktual’nyye voprosy rossiyskogo sudoproizvodstva: Dokazyvaniye s ispol’zovaniyem sovremennykh tekhnologiy: Materialy Vserossiyskoy (natsional’noy) nauchno-prakticheskoy konferentsii [Current issues in Russian judicial proceedings: Evidence using modern technologies: Materials from the All-Russian (national) scientific and practical conference] (pp. 8–12). Natsional’niy issledovatel’skiy universitet “Moskovskiy institut elektronnoy tekhniki”; Krasnoyarskiy gosudarstvennyy agrarnyy universitet.

2. Drozd, D. O. (2024). Protsessual’nyye formy ispol’zovaniya elementov iskusstvennogo intellekta v sovremennom arbitrazhnom i grazhdanskom sudoproizvodstve [The procedural forms of the use of artificial intelligence elements in the modern commercial and civil litigation] [Doctoral dissertation, HSE University]. http://www.hse.ru/data/xf/172/312/2084/Диссертация.pdf

3. Galkovskaia, N. G., & Kukartseva, A. N. (2024). Otsenka perspektiv i riskov ispol’zovaniia iskusstvennogo intellekta v sfere pravosudiia [Assessment of prospects and risks of using artificial intelligence in the field of justice]. Vestnik Grazhdanskogo Protsessa [Herald of Civil Procedure], 14(2), 257–282. https://doi.org/10.24031/2226-0781-2024-14-2-257-282

4. Gilles, P. (2011). Elektronnoye sudoproizvodstvo i printsip ustnosti [The electronic process and the principle of orality]. Rossiyskiy yuridicheskiy zhurnal [Russian Juridical Journal], (3), 41–54.

5. Harari, Y. N. (2024). Nexus: A brief history of information networks from the Stone Age to AI. Fern Press.

6. Kazikhanova, S. S. (2023). Web conference as a modern model of interaction between the court and participants in the process: Shortcomings of legal regulation and prospects for use. Courier of Kutafin Moscow State Law University (MSAL), (10), 79–87. https://doi.org/10.17803/2311-5998.2023.110.10.079-087

7. Kondyurina, Yu. A. (2020). Printsipy tsivilisticheskogo protsessa v sisteme elektronnogo pravosudiya [Principles of civil procedure in the e-justice system] [Summary of doctoral dissertation, Saratov State Law Academy]. Russian State Library. https://viewer.rsl.ru/ru/rsl01010253338?page=1&rotate=0&theme=black

8. Konstantinov, P. D. (2022). Vliyaniye informatsionnykh tekhnologiy na printsipy grazhdanskogo protsessa (sravnitel’no-pravovoye issledovaniye na primere Rossii i Frantsii) [The impact of information technology on the principles of civil procedure (a comparative legal study based on the examples of Russia and France)] [Abstract of doctoral dissertation, Ural State Law University named after V. F. Yakovlev]. Russian State Library. https://viewer.rsl.ru/ru/rsl01011374800?page=1&rotate=0&theme=black

9. Kovler, A. I. (2022). Antropologiya prav cheloveka v tsifrovuyu epokhu (opyt sravnitel’nogo analiza) [Anthropology of human rights in the digital age (the essay of the comparative legal method)]. Journal of Russian Law, 26(12), 5–29. https://doi.org/10.12737/jrl.2022.125

10. Kurochkin, S. A. (2024). Iskusstvennyy intellekt v grazhdanskom protsesse [Artificial intelligence in civil procedure]. Vestnik Grazhdanskogo Protsessa [Herald of Civil Procedure], 14(2), 42–74. https://doi.org/10.24031/2226-0781-2024-14-2-42-74

11. Laptev, V. A. (2024). Iskusstvennyy intellekt v sude — odna instantsiya: Na puti razvitiya tsifrovogo pravosudiya [Artificial intelligence in court — one instance: On the way to the development of digital justice]. Rossiyskiy Sud’ya, (11), 44–51. https://doi.org/10.18572/1812-3791-2024-11-44-51

12. Lukonina, Yu. A. (2023). Tsifrovaya tsivilisticheskaya protsessual’naya forma: Teoretiko-prikladnyye aspekty [Digital civil procedural form: Theoretical and practical aspects] [Doctoral dissertation, Saratov State Law Academy].

13. Lukyanova, I. N. (2020). Razresheniye sporov onlayn: Tekhnologichnyy put’ k «privatizatsii pravosudiya» [Online dispute resolution: A technological path to the “privatization of justice”]. Zakony Rossii: Opyt, Analiz, Praktika, (8), 45–48.

14. Mironova, Yu. V. (2021). Realizatsiya printsipov grazhdanskogo protsessual’nogo prava pri ispol’zovanii sistem video-konferents-svyazi [Implementation of civil procedural law principles when using video conferencing systems] [Doctoral dissertation, Saratov State Law Academy].

15. Mukhtarova, O. S. (2024). Blokcheyn v yurisdiktsionnoy deyatel’nosti arbitrazhnykh sudov [Blockchain in the jurisdictional activity of arbitration courts]. Zakony Rossii: Opyt, Analiz, Praktika, (10), 75–79.

16. Neznamov, A. V. (2024). Iskusstvennyi intellekt, edinoobrazie sudebnoi praktiki i tvorcheskii kharakter sudebnoi deiatel’nosti [Artificial intelligence, uniformity of judicial practice and the creative nature of judicial activity]. Vestnik Grazhdanskogo Protsessa [Herald of Civil Procedure], 14(2), 90–106. https://doi.org/10.24031/2226-0781-2024-14-2-90-106

17. Nhemi, S. (2023). Law without lawyers: Examining the limitations of consumer-centric legal tech services. Journal of Intellectual Property and Information Technology Law, 3(1), 15–76. https://doi.org/10.52907/jipit.v3i1.223

18. Reshetnikova, I. V. (2024). Iskusstvennyy intellekt v arbitrazhnom protsesse: Vozmozhnyye sfery primeneniya [Artificial intelligence in arbitration procedure: Possible areas of application]. Vestnik Grazhdanskogo Protsessa [Herald of Civil Procedure], 14(2), 30–41. https://doi.org/10.24031/2226-0781-2024-14-2-30-41

19. Samsonova, M. V. (2022). Kommunikatsiya uchastnikov protsessa s sudom i mezhdu soboy [Communication between parties and the court, and between parties themselves]. In E. G. Streltsova (Ed.), Tsifrovyye tekhnologii v grazhdanskom i administrativnom sudoproizvodstve: Praktika, analitika, perspektivy [Digital technologies in civil and administrative proceedings: Practice, analytics, prospects] (pp. 77–128). Infotropic Media.

20. Streltsova, E. G. (2024). O pravootnosheniyakh v tsivilisticheskom protsesse pri primenenii tsifrovykh tekhnologiy [On legal relations in civil procedure in the application of digital technologies]. Vestnik Grazhdanskogo Protsessa [Herald of Civil Procedure], 14(2), 75–89. https://doi.org/10.24031/2226-0781-2024-14-2-75-89

21. Tumanov, D. A. (2024a). Neskol’ko slov o smysle grazhdanskogo sudoproizvodstva i yego printsipakh [A few words about the meaning of civil proceedings and its principles]. Zakony Rossii: Opyt, Analiz, Praktika, (2), 13–22.

22. Tumanov, D. A. (2024b). Zashchita obshchestvennykh interesov v grazhdanskom sudoproizvodstve [The defense of public interests in civil judicial proceedings] [Dr. Sci. Dissertation, MGIMO-University]. https://mgimo.ru/upload/diss/2024/tumanov-diss.pdf

23. Wang, N. (2020). “Black box justice”: Robot judges and AI-based judgment processes in China’s court system. In Proceedings of the 2020 IEEE International Symposium on Technology and Society (ISTAS) (pp. 58–65). IEEE. https://doi.org/10.1109/ISTAS50296.2020.9462216

24. Zhuhao, W. (2021). China’s e-justice revolution. Judicature, 105(1), 37–47.


About the Author

D. A. Tumanov
Russian Foreign Trade Academy
Russian Federation

Dmitry A. Tumanov - Dr. Sci. in Law, Professor, Procedural Law Department

6A, Vorobiyovskoye Highway, Moscow, 119285


