Digital Law Journal

Critical view of the introduction of artificial intelligence in the administrative procedure: Reception of the EU AI Act in Italy

https://doi.org/10.38044/2686-9136-2025-6-2-8-27

Abstract

This article offers a critical examination of Italy’s initial legislative efforts to transpose the European Union’s Artificial Intelligence Act. Employing a methodological approach of argumentative-critical commentary, the analysis dissects the bill’s text to uncover the Italian government’s underlying philosophy and regulatory strategy. The study argues that the proposed framework suffers from a significant lack of overall vision, mistakenly treating AI as a non-penetrating technological tool and reducing its risks primarily to data privacy concerns, all while prioritizing industrial growth. The article critiques key provisions, including the ambiguous “anthropocentric” principle, the superficial regulations governing AI in information, healthcare, intellectual professions, and the judiciary, and the creation of new criminal offences. It further analyzes preliminary projects for integrating AI into administrative justice and explores the profound procedural implications of extending AI into areas of technical discretion, highlighting the potential for a paradigm shift in judicial review. By juxtaposing the Italian approach with a documented case of algorithmic discrimination in the UK, the research underscores the concrete risks to fundamental rights and the rule of law. The conclusion emphatically calls for a more technically competent and critically aware involvement from legal scholars and practitioners to safeguard citizens from the uncritical and pervasive adoption of AI in public administration and justice, warning against the shortcomings of the current fragmented and constitutionally superficial regulatory proposal.

Introduction

This article offers a preliminary examination of how the EU AI Act1 has been introduced into Italy’s legal and administrative system. It analyses Bill No. 1066AS, entitled “Regulations for the development and adoption of artificial intelligence technologies”,2 from a methodological perspective. According to the government, this bill is intended to be the first in a series of measures designed to implement the mandates that must be fulfilled within twelve months.

The analysis proceeds thematically, following the issues addressed in the legislative text and seeking to understand how the Italian government has interpreted and regulated them. Methodologically, this is therefore an argumentative and critical commentary. Accordingly, the regulatory references are integrated into the text rather than placed in a bibliography, and the analysis is confined to the decree itself, without exploring the individual topics it raises, as each of them would require a separate article, if not a monograph. The aim is to outline from the outset the general level of critical concern, thereby encouraging specialized reflection on each of the specific issues identified, both for research and analytical purposes and to suggest possible improvements in the relevant technical domains.

Overall, the analysis reveals a lack of strategic vision. The decree treats AI as a simple, non-invasive technological innovation, reduces risk considerations primarily to privacy, and presents the regulatory process chiefly as a driver of industrial growth.

Neither the form chosen for the text—issued directly by the government rather than examined by an ad hoc parliamentary committee with broader participation—nor the adopted solutions, which often reflect the European regulations uncritically and without sufficient detail, address these specific risks in a meaningful way, particularly in the context of a constitutionally oriented rule of law.

Italian Standards for Developing and Adopting Artificial Intelligence Technologies

Italy is a founding member of the European Union and, like all member states, must transpose EU regulations into its own legal system. On 23 April 2024, the Council of Ministers approved Bill No. 1066AS, entitled “Regulations for the development and adoption of artificial intelligence technologies”.3 According to the government, this bill is the first in a series of specific measures to be implemented over the next twelve months.

Responsibility for developing artificial intelligence systems is assigned to a strategy prepared by a special unit within the Prime Minister’s Office, in cooperation with the national authorities responsible for technological innovation, the Ministry of Enterprise and Made in Italy, and the Ministry of Universities. This institutional design reflects the government’s intention to play a central role in the development of AI systems. This is further demonstrated by the decision to entrust AgID (Agency for Digital Italy) and the National Cybersecurity Agency—both technical bodies under the Prime Minister’s Office—with applying national and European regulations.

However, the risk is that an overly national approach will clash with Italy’s technological and research-investment gap. In the author’s view, it would have been preferable to orient the program towards specialized collaboration, while still exercising close supervision and coordination of development and regulating the use of AI and data management. Concrete models exist: CERN, Airbus, or the European Space Agency, all of which demonstrate effective, efficient, and goal-oriented organizational structures. The not-so-subtle concern is that this process of nationalization and centralization may conceal an eagerness to control and distribute the substantial resources expected to flood the sector, including for patronage and electoral purposes.

The government’s strategy for AI development also includes venture capital activities by the Ministry of Enterprise and Made in Italy, involving the acquisition of stakes of up to €1 billion in SMEs operating in new technologies, quantum computing, or telecommunications, and demonstrating high growth potential.

Curiously, the government has chosen to prioritize suppliers operating on Italian territory rather than those more broadly within the European market.4

Particularly noteworthy is the definition contained in Article 1 of the Bill on “Provisions and powers delegated to the Government regarding artificial intelligence”,5 which states that the bill aims to promote “the correct, transparent, and responsible use of artificial intelligence, within an anthropocentric framework, aimed at seizing its opportunities.”

The “Anthropocentric” Approach

This conception of artificial intelligence is articulated in Article 3, which states that the use of this technology must respect the fundamental rights laid down in the Constitution6 and in European law, as well as the principles of transparency and proportionality of processes, while ensuring the reliability and accuracy of the data used to develop AI systems. Yet, beyond the ambiguity of the term “anthropocentric”—a notion whose meaning is unclear, since every human-made tool is anthropocentric by definition (even the ergonomics of a hammer is designed for human use)—Article 3 raises several concerns.

Substantively, the article’s purpose is not clear. Having established that an activity must not be “contrary to the fundamental rights provided for by the Constitution and European law and also to the principles of transparency and proportionality of the processes, reliability and correctness of the data used for the development of artificial intelligence systems,” nothing is said about how exactly these rights, which are already established and inalienable, are to be defended, enhanced, and protected.

Given that such rights are becoming increasingly shaky, if not reduced to mere statements of principle in an era marked by liberalism and globalization, where the core infrastructures of AI are controlled predominantly by non-European multinationals,7 it might have been appropriate to dedicate more attention to active, preventive, and precautionary protection measures, rather than simply asserting that “everything must be controlled by the government.”

The Bill then goes on to outline a series of principles to be applied across various sectors: information, economic development, healthcare, professional services, the judiciary, user protection, copyright, and, finally, criminal law.

AI and Information

Article 4 addresses the use of AI systems in the information sector, providing for the protection of media pluralism and the democratic nature of the press. It also requires that information on the processing of personal data be clear and accessible to everyone, so that individuals may exercise their right to object to the sharing of their data. However, the remedies, instruments, and mechanisms available for safeguarding these rights are neither specified nor explained. A quick comparison between the speed of AI systems and the sluggishness of Italy’s civil justice system perfectly illustrates the inadequacy of the current protections.8 The real source of uncertainty, however, lies in the lack of clarity regarding the nature of the issue itself.

Artificial intelligence is a system of hyper-computation combined with machine-learning technologies. To function, it must continuously process vast quantities of data. Without such data processing, it is difficult to understand how generative AI could evolve, as these systems obtain data from across the web. Since the most authoritative websites, including major and reputable broadcasters, have restricted access to their content, generative AI models (such as ChatGPT) often rely on platforms like Quora or Facebook* as sources. This is not an abstract concern, but a concrete reality.

Against this backdrop, the generic statement in Article 5, which stipulates that companies operating in the sector must be granted access to “high-quality data,” is insufficient. The bill does not regulate how such access will be ensured, nor does it define what constitutes “high quality”—a concept that is not easily determined in advance. Moreover, this provision cannot override intellectual property rights, nor violate industrial or research-related copyright protections.9

Article 23 provides that, within the regulation of audiovisual services, subject to the acquisition of consent from rights holders, content created or modified through AI systems “that are capable of presenting as real data, facts and information that are not”10 must be marked with an appropriate identification mark. This provision responds to the concerns generated by so-called “deepfakes”. The obligation of transparency is placed on the author or holder of the rights to such content, although the practical allocation of responsibility will depend on the implementing measures. The insertion of the identifying sign is excluded when the content forms part of a manifestly creative, satirical, artistic, or fictitious work or program, without prejudice to the protections for the rights and interests of third parties. The resolution of doubts as to the manner of implementation is left to self-regulatory and co-regulatory soft-law sources.

The government aims to foster the development of the Italian entrepreneurial fabric; thus Article 5 stipulates that the state and other public bodies:

  • promote the use of AI in production processes to improve human–machine interaction and increase productivity
  • foster the development of an innovative and open Italian artificial intelligence market
  • ensure that companies operating in the sector have access to high-quality data
  • direct the digital procurement platforms used by public administrations so that preference is given to solutions that guarantee the localization and processing of critical data in data centers located throughout the country.11

No specific direction emerges, apart from reference to an “anthropocentric model that respects the Constitution.” Beyond this, and aside from an explicitly 19th-century form of protectionism, the Bill’s primary thrust appears to be the orientation of AI towards the technological transformation of production enterprises.

AI and Health

With regard to healthcare, the use of AI systems must not lead to discriminatory criteria in access to treatment and medical services. The principle of informed consent is reflected in the requirement to inform patients about the use of artificial intelligence systems. This is fine. But all of this is already provided for by general rules consolidated through extensive case law.12 What is missing is any consideration of how to handle and protect the product generated by this use.

The issue first emerged in relation to metadata. Google, paradoxically and dramatically, states the truth when it refuses to provide users with much of the information requested about their own accounts: Google does not actually know certain things because, once transformed and reassembled, the data becomes an articulated and amorphous metadata set in which the individual is no longer traceable.13 The essential point was to standardize and protect metadata—a challenge that has not been met. An AI system may certainly require informed consent to acquire the initial raw data from which information originates, and existing norms already regulate such access.14

However, AI is not a cloud in which files are simply stored in separate and recognizable folders. For example, an AI analyzing one million ultrasound scans per month in Italy could decompose the data in multiple ways: a gynecologist might search only for the fetus’s vital parameters, but a future AI system could extract macro-data on the microcirculation of the uterine wall. Twelve million scans per year of anonymized data can reveal an enormous amount of information.
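To make the scale of the problem concrete, consider a minimal sketch in Python (entirely illustrative: the field names, regions, and values are invented, and no real clinical schema is assumed). It shows how records from which every personal identifier has been removed can still be aggregated into population-level indicators of exactly the kind that pharmaceutical companies or device manufacturers could exploit.

    # Illustrative sketch only: field names and values are invented; no real
    # clinical schema is assumed. Records stripped of every personal identifier
    # can still be aggregated into valuable population-level statistics.
    from statistics import mean

    anonymized_scans = [
        {"region": "Lombardia", "gestational_week": 22, "uterine_flow_index": 0.81},
        {"region": "Lombardia", "gestational_week": 30, "uterine_flow_index": 0.74},
        {"region": "Sicilia", "gestational_week": 24, "uterine_flow_index": 0.69},
        # ...in reality, millions of records per year
    ]

    # Group the anonymized records by region: no individual is traceable, yet the
    # aggregate output is precisely the "macro-data" that could be monetized.
    by_region = {}
    for record in anonymized_scans:
        by_region.setdefault(record["region"], []).append(record["uterine_flow_index"])

    for region, values in by_region.items():
        print(region, round(mean(values), 3))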

Who manages and protects this data, and how? What about the economic dimension, given that pharmaceutical companies, medical device manufacturers, and others could make enormous profits from it? Who is entitled to access this data, or to request that AI analyze it for research purposes? For what objectives?

The problem here is not merely a lack of regulation that could be solved by an amendment. Rather, the issue is that the very nature of what must be regulated is not understood. At the root, it is not even clear what one is talking about.

AI, Intellectual Professions, and Copyright

The regulation of AI use is also crucial in the intellectual professions, where AI systems may be employed only to carry out activities that are instrumental and supportive of the professional task, and where the intellectual work performed by the professional must remain predominant. To preserve the relationship of trust between the professional and the client, information about artificial intelligence systems used must be communicated to the recipient of the service in clear, simple, and comprehensive language.

This immediately raises questions about the meaning to be attributed to “instrumental and support activities” and about how to measure the “prevalence” required by the rule.15 It should be recalled that professionals compete with colleagues in other jurisdictions and with service companies that would not be subject to these limitations. The scarcely concealed assumption underlying this framework is that a professional commissioned to write a text merely sits in an armchair, submits a question to ChatGPT, and lets the system do the work. It is the anthropocentrism of a fraudulent human logic.

But how does this apply to designing a bridge or a tunnel, where all the calculations, virtual simulations, 3D processing, and structural analyses, including those of each individual component, are already performed by machine-learning systems with hyper-computational capacity?16 Because this is already happening.

Since the use of artificial intelligence frequently intersects with user protection in audiovisual and radio services and with copyright matters, Article 23 of the bill17 states that any information content broadcast on any type of audiovisual or radio platform that has been entirely created or modified with AI systems must be made recognizable to users by the author or the owner of the economic exploitation rights through an identification sign or an embedded marking with the wording “I.A.”.18 With regard to copyright protection, Article 24 of the bill provides for the amendment of Article 1 of Law No. 633/1941, according to which works of the mind protected by that law also include those created with the use of artificial intelligence, where the human contribution is in any case “creative, relevant and demonstrable.”19

Here again, anthropocentrism yields to a kind of “suspicion of man”. Yet clear principles already exist and may be borrowed, for instance, from common-law systems which, for obvious historical reasons, have dealt with relevant cases for much longer. Copyright depends on the threshold of originality based on three requirements: skill (creative competence), labor (commitment, work), and judgement (the ability to discern, distinguish, and select the distinctive elements of the work).20

In extremely concise terms, applying this tripartite parameter, whose elements must all be present simultaneously and at least broadly equivalent, if not quantitatively comparable, Anglo-American jurisprudence recognizes that works produced with AI systems exhibit skill and labor. What is missing is judgement, which belongs exclusively to the human being.

These are principles of common sense, not as excessively flexible as they might seem, yet it appears evident that the authors of the provision failed to grasp the essential point.

Undersecretary Butti has clarified that, in the field of copyright law, competence lies with the EU, and EU rules already regulate the protection of content in cases of web scraping, establishing an opt-out mechanism for rights holders.21

The bill introduces two novelties and a reminder. The first novelty is the requirement of an identification mark for any content made or modified with AI. The second is copyright protection for works produced with AI, provided that the human contribution is creative, relevant and demonstrable. And the reminder is a reference to copyright rules on the reproduction and extraction of works or other materials through artificial intelligence models and systems, including generative systems, where permitted for research purposes or when the use of such materials has not been expressly reserved by copyright holders, related rights holders or database owners.

But here, practically speaking, there is nothing new or innovative. The same has applied for 70 years to any printed work and has always been expressly stated in its disclaimers.

AI and Law

In the judiciary, artificial intelligence systems may be used exclusively for the organization and simplification of judicial work, as well as for jurisprudential and doctrinal research. The Ministry of Justice will regulate the use of artificial intelligence systems by ordinary judicial offices. For other jurisdictions, use will be regulated in accordance with their respective systems. The interpretation of the law, the assessment of facts and evidence, and the adoption of decisions remain the exclusive prerogative of the magistrate. The Civil Court will be the competent body to decide disputes relating to the operation of AI systems.

However, a fundamental uncertainty remains: if AI systems are employed in evidentiary activities, in court-appointed technical assessments (CTUs), in party-appointed technical reports (CTPs), in reconstructions, or in investigative processes, to what extent can the resulting judicial measure be considered “AI-free”? Above all, despite vast academic debate, there is no limitation or reference to criteria governing the programming of algorithms in judicial contexts, particularly for the purposes of verifying potentially discriminatory or prejudicial decision-making.

The government also intervenes in criminal matters, penalizing the distorted use of AI systems capable of harming legally protected interests such as the moral integrity of individuals.

A new offence is introduced by Article 612-quater of the Criminal Code, which punishes:

“Whoever, in order to cause damage to a person and without his consent, sends, delivers, assigns, publishes or in any case disseminates his image, video or voice, falsified or altered through the use of artificial intelligence systems and capable of misleading as to their genuineness”22 with imprisonment of six months to three years, or from one to five years if unfair damage results from the act.

The offence is punishable upon complaint by the injured party, but prosecution proceeds ex officio if the act is linked to an offence prosecuted ex officio, or if committed against a person incapacitated due to age or infirmity, or against a public authority by reason of the functions exercised. Further aggravating circumstances concerning the use of artificial intelligence are also introduced in Articles 61(11-novies), 494 and 501 of the Criminal Code.23

More broadly, the use of AI systems becomes an aggravating circumstance for several offences, including substitution of person, fraudulent price manipulation, fraud, computer fraud, money laundering and self-laundering, and market rigging. Copyright protection is extended to cover reproduction or extraction of text or data from works or other materials on networks or in databases in violation of Articles 70-ter and 70-quater, including through AI systems.24

In this respect, the measure aligns with a long-standing governmental tradition whereby almost any regulatory intervention, even incidentally, affects criminal law.

Given the particular sensitivity of criminal law, this approach would require greater caution, not least in order to maintain the stylistic rigor of the Criminal Code, the primary instrument for guaranteeing the principle of “knowledge and discernment” of the criminal law as a prerequisite for awareness of wrongdoing.

Yet through the accumulation of bis, ter, quater articles and “x-nonies” paragraphs, the Italian Criminal Code has more than tripled in volume without contributing to a reduction in offences, with the exception of those abolished or converted into administrative violations. Instead, this legislative inflation produces less clarity, reduced intelligibility, and diminished legal certainty. Expanding the Criminal Code increases the risk of conflicts among norms, uncertainty in practical application, overlaps among offences, whether objective or interpretative, and broadens the margins of judicial discretion. These characteristics are ill-suited to Article 27 of the Constitution25 and incompatible with a penal system that is truly guarantor-oriented, civil, and at least functional, if not effective.

There was no compelling need for Article 612-quater. The conduct could have been typified within Article 612,26 and if a different penalty range was desired, that provision could have been amended. Treating the use of AI—a tool—as an aggravating element of punishment is more a political manifesto than an effective instrument of criminal policy. It is akin to holding that the crime of massacre under Article 422 of the Criminal Code27—one of the few offences without an “attempt” form, since planting a device, even without casualties, constitutes the offence—would be more serious if committed with a bomb rather than a machine gun, and less serious if committed with a pistol.

In criminal law, which, let us recall, directly affects personal liberty, the decisive factors should be “the fact” and the resulting harm to the protected right, serving as both aggravating and mitigating elements. Criminal law should be parsimonious, but this would run the serious risk of also becoming clear, certain, and stable, like any self-respecting criminal law.

Yet the entrenched vice of weak politics is to intervene strongly in criminal matters through legislative proclamations: a shortcut for claiming to have taken concrete action. Such concreteness is reified in a few lines of statutory text. And this is not the vice of a single political camp, but a bipartisan legislative habit.

AI in Training, Education, and Research

The bill also provides for the enhancement of customized teaching plans for students with high cognitive abilities through the integration of AI training, as well as for the use of AI systems to improve psychophysical well-being through sporting activity, including the development of innovative solutions aimed at greater inclusion of persons with disabilities in sport.

If this has escaped the reader’s notice, let us make it explicit: AI is to be used to support customized educational plans for students with high cognitive abilities, whereas for people with disabilities, it is intended to generate innovative solutions for greater inclusion in sporting activities.

Is the underlying cultural model clearer now?

This formulation entirely overlooks decades of groundbreaking studies on the use of AI (today) and information technology (yesterday) for the cognitive development and support of differently abled, and indeed disabled, individuals.

Subject to the general principles set out in the bill, artificial intelligence systems may also be used for the organization of sports activities. With respect to AI systems and minors, the government introduces a series of limitations: for minors under 14, parental consent is required. Minors between 14 and 18 may consent to the processing of personal data related to AI use, provided that the information and communications (privacy notices) are easily accessible and understandable (sic!). More generally, information about data-processing must be drafted in clear and simple language to ensure that users are fully aware of, and able to object to, improper processing of their personal data.

However, we were unable to achieve this with Google, Facebook,* and similar platforms regarding personal data; we completely failed to do so with metadata; and now we expect to apply these principles to AI, which has a computational capacity that is orders of magnitude greater? Clear and simple language is indeed desirable, but such clarity requires at least a basic understanding of the phenomenon in question. Here, that understanding is manifestly absent.

In what clear and simple language should it be explained that your data is taken from you, full stop, and that AI systems ultimately care little about whether they belong to Mario Rossi or Maria Bianchi? Names are merely extraneous elements that burden the process. This data will be broken apart, sifted, recombined, reshaped, and recontextualized. It is like demolishing three houses to construct one new building, using selected pieces while anonymizing their origins. That is how the process should be explained, and precisely how it will not be explained.

Article 8 appears to attempt to remove some of the obstacles imposed by the current legal framework on the use of health data for scientific research.28 Among other things, it allows not-for-profit entities to make secondary use of personal data, including that in special categories, which has already been collected in previous research projects without obtaining additional consent from the data subjects.

Although this text will be debated and amended in Parliament, it already reveals potential incompatibilities with the provisions of the GDPR, as well as a regulatory gap concerning scientific research conducted by profit-making organizations. There is also a coordination issue with the provisions on scientific research in healthcare contained in the law converting the so-called PNRR decree, definitively approved on April 23 (the same day as the approval of the DDL IA), which amends Article 110 of Legislative Decree No. 196/2003 (the Privacy Code).29 These provisions, among other things, eliminate the obligation of prior consultation with the Garante30 in cases where health-related data are processed, without the data subject’s consent, for medical scientific research.

In short, the minimal safeguard applicable to the most sensitive data is, therefore, effectively circumvented, while any reference to data-management rules for profit-making companies is omitted. Economic development and cybersecurity are presented as areas in which the provision of quality data and its storage in national data centers are valued, but these remain merely principles. Reading the provision multiple times reveals that it is so vague as to be absurd, and so devoid of substance that one cannot even say it is “wrong”.

Article 6 then identifies activities carried out with the help of AI systems for purposes of national security and defense, including activities to protect national security in cyberspace, which are excluded from the scope of the bill.31

AI and Public Administration

Public administrations (PAs) may use artificial intelligence to increase the efficiency of their activities, reduce the time required to complete procedures, and improve both the quality and quantity of services provided to citizens and businesses, ensuring that users are informed of its operation and able to trace its use. The adoption of each measure remains the responsibility of officials and managers—a responsibility on which no further clarification is provided.

The 26-article text stipulates that its implementation, including by public administrations, must be financially neutral. No additional resources are allocated, apart from the one billion euros already earmarked for CDP Venture Capital and now differently channeled towards the entrepreneurial system—a sum that, according to the government, should leverage three billion euros of private investment.32

“Italy is the first country to have launched an industrial policy on AI,” stressed Butti33—an emphatic, disproportionate, and technically incorrect statement. The industrial focus is present in all sectoral regulations. The real question is whether other ontologically more central aspects have been neglected in order to highlight and emphasize industrial development.

In this respect, it remains to be seen what the Italian Strategy will contain, since only an executive summary has been published so far, despite the fact that the bill dedicates Article 21 to it. The ten-page AgID document does, however, contain one revolutionary idea: the creation of a foundation to coordinate activities—that is, a new institution, conceived in the most traditional manner, though admittedly among the most organizationally flexible.

Another aspect of Butti’s statement also merits attention: he implies the existence of an industrial policy isolated from resources. Someone should explain to him what industrial policy is and why this assertion is erroneous.

These provisions, as noted earlier, eliminate the requirement for prior consultation of the Garante when processing health-related data for medical scientific research purposes without the consent of the data subject. The real crux of this regulatory framework is the omission of any reference to the management of data by for-profit companies.

These arguments have been criticized by many authors.34

AI in Administrative Justice: First Projects and Notes

In October of 2024, the General Secretariat of Administrative Justice published a document outlining ongoing activities related to the introduction of artificial intelligence technologies in administrative justice.35

The aim is to provide preliminary information on developments initiated by the Secretariat at a time when interest in the use of these technologies has reached particularly high levels, both due to their spread and technological advancement and as a result of regulatory initiatives at EU and national level.

The document sets out strategies, methodologies and use cases for AI. AI implementation projects in administrative justice concern:

(a) the anonymization of measures

(b) supporting judges’ work by searching and displaying content

(c) cyber resilience

On the first aspect, anonymization (a), the document explains the rationale adopted, emphasizing the need to reconcile two values: the confidentiality of the individuals concerned and the intelligibility of the measure, which could otherwise be compromised, including in its reasoning, by unsupervised anonymization.

“Achieving a balance between the two aspects (protection of confidentiality and comprehensibility of the text) in fact presupposes an interpretative activity that can only be carried out by judges, who, at that stage, help to define the intrinsic value to be assigned to each piece of information, also considering its relationship with the context, distinguishing the nature and characteristics of the disputes. Artificial intelligence is thus modelled to suit the variability of the context, to then verify in the course of concrete use, thanks to the feedback received from users (feedback duly verified and administered to the AI itself), the correctness of the results obtained still under the control of judges, who, in this executive phase, monitor the development of the system, i. e. the way it responds to the feedback offered by users, supervising its evolutionary path.”36
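Purely by way of illustration of the supervised workflow described above (the patterns, the sample decision, and the review flag are assumptions made for this sketch, not the Secretariat’s actual implementation), a propose-then-review anonymization step in Python might look like this:

    # Minimal sketch of supervised anonymization: the system proposes redactions,
    # but nothing is applied until a judge has validated each proposal.
    # The patterns and the sample decision are hypothetical.
    import re

    PATTERNS = {
        "TAX_CODE": re.compile(r"\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b"),
        "NAME": re.compile(r"\b(?:Sig\.|Sig\.ra)\s+[A-Z][a-z]+\s+[A-Z][a-z]+"),
    }

    def propose_redactions(text):
        """Return proposed redactions; approval is left to the human reviewer."""
        proposals = []
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                proposals.append({
                    "label": label,
                    "span": match.span(),
                    "text": match.group(),
                    "approved_by_judge": None,  # to be set during human review
                })
        return proposals

    decision = "Il ricorso del Sig. Mario Rossi (RSSMRA80A01H501U) è accolto."
    for proposal in propose_redactions(decision):
        print(proposal)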

With regard to judicial support activities (b), those activities classified as high-risk under the AI Act are excluded from the scope of AI applications. Instead, the applications included are those intended to facilitate various judicial functions and which, as structured, “do not involve ‘creative’ activity.”

These are, in particular:

  • the identification of related or similar appeals pending before individual sections and to be scheduled for decision. As the report specifies, such identification makes it possible to achieve several objectives: optimizing study and analysis; assessing whether to discuss matters in the same hearing or in thematic hearings; avoiding conflicting decisions within individual sections; improving the distribution of workloads; and accelerating decision-making
  • the search for case-law precedents using a tool based not only on keywords, as is currently the case, but also on the detection of semantic connections, thereby ensuring a higher degree of relevance in search results (a minimal sketch of such embedding-based retrieval follows this list)
  • the detection and immediate visualization of the rules or case-law pronouncements cited, explicitly or implicitly, in a defense document, thus preventing the judge from interrupting the analysis to consult external databases, saving time, and avoiding unnecessary interruptions in concentration
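By way of illustration only (the model name, the sample texts, and the similarity measure are assumptions made for this sketch; the report does not disclose the actual architecture of the administrative-justice tools), precedent retrieval based on semantic connections rather than keywords can be reduced to comparing sentence embeddings:

    # Illustrative sketch of semantic precedent retrieval: the query and the
    # precedents are embedded as vectors and ranked by cosine similarity rather
    # than by shared keywords. Model choice and sample texts are hypothetical.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    precedents = [
        "Annullamento di un provvedimento espropriativo per difetto di motivazione.",
        "Esclusione da una gara d'appalto per irregolarità formale dell'offerta.",
        "Risarcimento del danno da ritardo nella conclusione del procedimento.",
    ]
    query = "mancanza di motivazione in un decreto di esproprio"

    # Cosine similarity between the query vector and each precedent vector.
    scores = util.cos_sim(model.encode(query), model.encode(precedents))[0]
    for precedent, score in sorted(zip(precedents, scores.tolist()), key=lambda p: -p[1]):
        print(f"{score:.2f}  {precedent}")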

As for cyber resilience, specific defense strategies have been implemented against internationally identified adversarial machine learning techniques. According to the document:

“The guideline followed is that of valorizing the positive impacts that can be derived from technological developments on the organization of work, but with clear attribution to this technology of an instrumental role, of support to the judge in the phase of study, updating and analysis. The processing activity remains exclusively entrusted to the judge. On a conceptual level, it is more appropriate to speak of ‘accelerated intelligence’ instead of ‘artificial intelligence’.”37

Machine learning models, especially deep learning models (i.e. models that simulate the action of the human brain through multilayer artificial neural networks, such as LLMs, large language models designed for linguistic purposes), are difficult to govern and may give rise to so-called hallucinations, to overfitting, when the model adheres too closely to specific training data, or to overgeneralization, when the model extrapolates excessively.
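The notion of overfitting can be made visible with a deliberately trivial numerical example, unrelated to any legal dataset (all figures below are invented): a model flexible enough to reproduce its training data exactly will typically perform worse on data it has not seen than a simpler model.

    # Toy illustration of overfitting: a degree-9 polynomial memorizes ten noisy
    # training points, while a straight line usually generalizes better to new data.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, 10)   # noisy linear relationship
    x_test = np.linspace(0, 1, 50)
    y_test = 2 * x_test                              # the true, noise-free relationship

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")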

Research in the field, notably at Stanford, has highlighted the difficulty these models encounter in performing legal reasoning, identifying negative impact factors such as a lack of uniformity and the excessive length of legal documents (Dahl et al., 2024). The approach must therefore be cautious: aware of risks, free from bias, yet characterized by critical scrutiny and constant supervision.

In the Italian legal system in particular, the centrality of a “human” judge is constitutionally mandated: jurisdictional functions are, by the Constitution,38 entrusted to a judge who is a natural person, pre-appointed by law, an impartial third party, and subject only to the law. Ethical values and the protection of rights cannot be delegated to technology; leadership cannot be automated. These considerations are reflected in the document itself. Nonetheless, the risk remains high due to the combination of uniform formats and AI applications integrated into SIGA (the administrative justice information system).

The use of standardized formats is, in itself, at odds with the originality inherent in the intellectual profession of the lawyer; with decorum; and, even more fundamentally, with the function of legal representation, which requires that the lawyer’s role not be confined within rigid formal structures that would inevitably impair the effectiveness of the defense.

At a time when legal design is under discussion,39 these concerns require even greater emphasis as regards the format of judgments and legal acts.

A first step must be to guarantee cross-participation in decision-making processes concerning the introduction of AI in the administration of justice. This entails ensuring the participation of lawyers in regulating AI systems; regulating such systems at least through delegated legislation adopted after consultation with specialist legal associations; and involving experts from non-legal disciplines capable of identifying vulnerabilities and weaknesses that fall outside narrow procedural dynamics.

The use of artificial intelligence in administrative procedures should also be better regulated by imposing on public administrations the obligation to justify the recourse to AI as a function of real efficiency and improvement of specific services for citizens and businesses; guaranteeing users the knowability of its operation, the traceability of its use and, more generally, transparency; reserving autonomy and decision-making power to the official adopting the measure and/or to the person responsible for the procedure; and prohibiting the use of artificial intelligence for the generation of texts of any kind.

The subject is particularly delicate, especially in light of the repeal of the offence of abuse of office and the related cases before the Constitutional Court, because essential aspects remain undefined (Di Salvo, 2024b).

Justification of the procedure, an essential and central requirement under Law 241/1990,40 must be human, specific to the individual measure, not “per tabulas or generalized,” and must reflect and follow the entire procedural path anchored to the specific factual situation (Di Salvo, 2024a). Today, these intrinsic characteristics exclude the generic formula adopted via copy-paste, and, tomorrow, the mechanical delegation of reasoning to digital automatisms.

If motivation cannot, even when discretionary, devolve into arbitrariness when absent or merely “per tabulas,” this applies even more strongly in the case of algorithmic decision-making, where the objective weighting of each factor is neither disclosed nor verifiable. Without absolute and verifiable ex-ante transparency, the use of AI should be excluded a priori.

This holds even more firmly with regard to the reasoning of judicial decisions. On the one hand, the integration of AI tools into legal research may be desirable. Indeed, to some extent, they are already present in current search engines, which make extensive and often indiscriminate use of such systems. On the other hand, the nontransparent and unverifiable attribution of “weights” to results raises serious concerns: five precedents versus three, how are they weighted? How much weight is assigned to chronology, jurisdiction, or forum? What guarantees exist that arguments are traceable to the specific case? What ex ante safeguards against discrimination can be offered in a transparent manner?

These are merely preliminary observations, each of which would merit dedicated analysis involving expertise beyond the strictly legal domain.

Accordingly, the judge must articulate, fully and transparently, all aspects of this weighting process, beyond doubt and subject to verification. And in this sense, it is far from clear that the adoption of such systems will guarantee ex-post efficiency or procedural speed. It is more straightforward for a judge to clarify his own reasoning parameters than to explain those embedded in a programmed system, which may not be his own, and which he is not required to share, thereby triggering a new mechanism of counter-assessment.

AI and Judicial Review of Technical Discretion: The Possible Procedural Repercussions of a Massive Extension of AI in Administrative Activities Characterized by Technical Discretion

In recent years, we have witnessed increasing momentum toward the introduction of artificial intelligence in public administration (PA). In this context, we are not dealing with generative AI, but rather with a set of processes performed by an algorithmic system capable of going beyond the functioning of a “traditional” algorithm. AI-based systems rely on machine learning, enabling them to analyze data, make choices, and take decisions autonomously, even without human intervention.

Over time, the Council of State has expanded the circumstances in which public administrations may employ automated decision-making procedures. Whereas initially it was held that algorithms could be used only for activities of a binding nature, the judges have gradually adopted a more open position, concluding that:

“If recourse to computer tools may appear easier to use in relation to so-called binding activities, there is nothing to prevent [the purposes established by law], pursued with recourse to the computer algorithm, from also being pursued in relation to activities characterized by areas of discretion. Rather, if in the case of the bound activity much more relevant, both in quantitative and qualitative terms, may be the use of tools for the automation of data collection and evaluation, even the exercise of discretionary activity, especially technical, may in abstract benefit from the efficiencies and, more generally, the advantages offered by the tools themselves,”41 and that: “there are no reasons of principle, or rather concrete reasons, for limiting the use to binding rather than discretionary administrative activity, both of which are expressions of authoritative activity carried out in pursuit of the public interest.”42

It has been observed that extending automated administrative decisions to discretionary activities obliges the judge to verify the correctness and transparency of the procedure in all its components, thereby blurring the traditional distinction between legitimacy, which is subject to judicial review, and merit, which is not. In this manner, the automated decision reverses the relationship between the administration and the judge, strengthening the latter’s role in scrutinizing the correctness of the intersubjective relationship between citizen and public authority.

However, the Council of State has clarified that, to be legitimate, the PA’s use of an algorithm must comply with three important principles.

  • The principle of knowability: A reinforced expression of the principle of transparency according to which the PA must prepare instruments capable of clearly illustrating how the algorithm functions and how it affects the decision-making process in order to prevent citizens from being excluded from decisions that affect them. Otherwise, the correctness of the decision would be knowable only to a very small number of highly qualified individuals, and democracy would give way to technocracy.
  • The principle of algorithmic non-discrimination, which requires the administrative officer responsible for configuring the system to verify the input data, so as to prevent discriminatory output. It is undeniable that, thanks to its ability to process large volumes of data, automate complex and repetitive procedures (thus reducing waiting times for citizens), and provide predictive analyses capable of anticipating needs, AI can improve the efficiency of public service management and delivery, making them increasingly interactive and personalized. However, these advantages are offset by numerous risks, since the use of AI projects the decision-making process into a “black box”, an indecipherable dimension in which human beings find it difficult to orient themselves. A minimal sketch of an output check against this principle follows this list.
  • The principle of non-exclusivity of the algorithmic decision, meaning that the mediating and interest-balancing role of the administrative officer is always required upstream. The algorithm is merely a procedural and investigative instrument and, as such, remains subject to all the verifications typical of any administrative procedure. Therefore, rather than replacing indispensable human activity or disempowering administrative officials, the algorithm must operate solely as a modus operandi—in other words, the algorithm must assume a servant role with respect to human decision-makers, not the other way around.
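As a purely illustrative aid to the non-discrimination principle (the groups, figures, and four-fifths threshold below are assumptions made for the sketch, not a standard imposed by the bill or by the Council of State), a minimal audit of an algorithm’s output can be reduced to comparing the rates at which different groups are selected for scrutiny:

    # Minimal sketch of an output audit for algorithmic non-discrimination:
    # compare the rate at which each group is flagged by the system.
    # Groups, counts and the 0.8 threshold (the "four-fifths rule") are illustrative.
    flags_by_group = {            # {group: (flagged_cases, total_cases)}
        "group_A": (30, 1000),
        "group_B": (90, 1000),
    }

    rates = {g: flagged / total for g, (flagged, total) in flags_by_group.items()}
    reference = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / reference
        verdict = "OK" if ratio >= 0.8 else "possible disparate impact"
        print(f"{group}: selection rate {rate:.1%}, ratio to highest {ratio:.2f} -> {verdict}")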

In recent years, legal scholarship has addressed the thorny issue of attributing responsibility for algorithmic decisions when the system produces outputs that cannot be ascribed either to the intention of the public authority or to the programmer (Grimmelikhuijsen & Meijer, 2022, pp. 232–242). One thesis holds that the use of automated procedures does not alter the liability regime, with the consequence that the decision remains imputable to the officeholder. A second thesis proposes a dualistic approach: by separating the phase preceding the compilation of the software, in which the programming rules are established, from the operation of the software that issues the act, the official would be liable for errors in the pre-software stage and the programmer for damages caused by the software.

Starting from the premise that, at present, recourse to AI is deemed precluded where highly debatable or non-standardizable assessment criteria are involved, and therefore whenever a concrete evaluation by the official is required, it becomes particularly interesting to consider the possible procedural consequences of a substantial expansion of AI use in administrative activities marked by technical discretion. One may think, for instance, of drafting competition questions and identifying correct answers which, as case law has repeatedly clarified, are an expression of technical discretion and are beyond judicial review, in the sense that the judge cannot challenge the accuracy of the answers deemed correct by the commission of experts but must limit himself to detecting flaws of legitimacy in the presence of actual errors.43

The real crux lies in the reviewability of the decision, which unfolds on several levels: the reviewability of the decision itself, of its “motivation”, and above all, of the algorithm. This would imply recognizing a substitutional jurisdictional review whenever the technical assessment carried out by AI proves inadequate, unreliable, or incorrectly applied. In such cases, and only in such cases, the judge, assisted by technical expertise, could replace the AI’s technical assessment with his own in order to avoid serious distortions of the constitutional framework.

This path appears to be the most rational and system-coherent. In the absence of effective ex-ante control by the administrative authority, an intensive ex-post review by the judiciary would become essential to correct the AI’s technical assessment.

This line of reasoning undermines theoretical doctrinal constructions,44 which claim that technical evaluations are reserved for the public administration. Among these stands the thesis that derives the existence of a reserve of technical assessment for the PA from the principle of good performance enshrined in Article 97 of the Constitution. Yet establishing that the PA must efficiently satisfy community needs does not mean excluding the possibility that other entities may be able to make technical assessments more efficiently. On the contrary, it is plausible that the Constituent Assembly introduced this rule precisely because the PA, as still happens today, struggled to guarantee an adequate standard of efficiency.

It follows that it would be unreasonable to assume a reservation of administrative authority over non-discretionary assessments. Even so, insofar as the adopted AI system forms part of the PA, such reservation would remain intact in a different form.

The principle of the fullness and effectiveness of judicial protection, derived from Articles 24 and 113 of the Italian Constitution,45 and the principle of equality of the parties in proceedings under Article 111(2), a corollary of due process and a specification of the broader principle of equality in Article 3, require that “all those judgments that do not impose the only choice reserved to the administration, which is that of opportunity/untimeliness, cannot be considered excluded from judicial review.”

This view has also been endorsed by the Strasbourg Court, which clarified that: “in a given case where full jurisdiction is contested, the proceedings could still satisfy the requirements of Article 6 § 1 of the Convention if the court deciding the matter would examine all of the plaintiff’s claims on the merits, point by point, without ever having to decline jurisdiction in answering the questions or establishing the facts. On the other hand, the Court has found violations of Article 6 § 1 of the Convention in other cases where national courts had considered themselves bound by the previous findings of administrative bodies, which were decisive for the outcome of the cases before them, without independently examining the relevant issues.”46

Thus, the judge could fully replicate the technical assessment carried out by the AI, even if complex, and would not be bound by it in any way. A judicial review limited to the verification of legitimacy or manifest unreasonableness of the robo-decision would contravene constitutional and European principles, as well as the case law of the European Court of Human Rights.

Recognizing the power of technical assessment in the hands of the public authority would require abandoning the traditional doctrine of weak intrinsic review, inaugurated by the Council of State in 2001. That doctrine permitted judges in the general jurisdiction of legitimacy to verify the suitability of the technical criterion used by the PA and the correctness of its application, but, applying the principle of separation of powers, not to replace the PA’s assessment. To uphold AI-based decisions, one would instead have to move toward strong intrinsic (and thus substitutive) review, which is currently considered admissible only in relation to sanctions imposed by the Competition and Market Authority.47

When PA Discrimination Comes from AI: An English Case

In the UK, the AI system used to detect benefit fraud has proven discriminatory. The machine-learning program used to examine Universal Credit claims incorrectly targets individuals from certain groups more than others. This was recently revealed in an investigation published in The Guardian by Robert Booth on December 6, 2024: the AI system employed by the UK government to detect welfare fraud exhibits discrimination based on age, disability, marital status, and nationality.48

An internal evaluation of the machine-learning program used to analyze Universal Credit claims in England found that it disproportionately selected individuals from specific groups for investigation. In practice, the system disadvantaged particular racial communities when identifying alleged abuses of the welfare system.

The issue emerged following a records request submitted to the Department for Work and Pensions. Although the final decision on whether a person will receive unemployment benefits remains in the hands of a human decision-maker, officials maintain that continued use of the system, which aims to save approximately £8 billion per year lost to fraud and error, is “reasonable and proportionate.”

Activists responded by criticizing the government for adopting a policy of “first do wrong, then fix it” and called on ministers to ensure greater fairness toward groups that the algorithm wrongly suspects of attempting to cheat the system.

Recognition of disparities in the way the automated system assesses fraud risks is also likely to intensify government scrutiny of AI tools, while simultaneously fueling demands for greater transparency.

There are currently 55 automated tools used by public authorities in the UK that may influence decisions affecting millions of people, despite the fact that only nine are listed in the government’s register.

In recent years, government departments, including the Home Office, have been reluctant to disclose information about their use of artificial intelligence, justifying this secrecy on the grounds that increased transparency could enable malicious actors to manipulate the systems.

The British case is neither unique nor isolated. It stems from a well-known, but difficult-to-eliminate bias in the programming and weighting of data—a problem originating not from machines, but from human decision-making. Similar phenomena have already been identified in numerous predictive systems used in the United States.49

In this respect, adopting standardized models on the pretext that they are cheaper than developing new ones does not resolve the problem. Rather, like a virus, it spreads from one jurisdiction to another and into heterogeneous administrative settings.

It is clear that the historical configuration of a neighborhood in major urban centers often reflects the convergence of racial, social, and educational factors, which in turn affect income levels, access to healthcare, and employment types. An algorithm designed without correcting for these elements will inevitably assign greater risk scores to certain neighborhoods and social classes—precisely those where the need for subsidies is concentrated.

As the number of applications increases, so does the detected fraud: if district A receives 10 applications and district B receives 100, it is statistically likely that district B will show a larger number of fraud cases. However, this statistical pattern must be interpreted in its socio-racial context.
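The purely statistical side of this argument can be shown with a deliberately simple simulation (all figures are invented): even when the underlying fraud rate is identical everywhere, the district that submits more applications produces more detected cases, and a naive risk model trained on those absolute counts will label it as riskier.

    # Toy simulation: identical true fraud rates, unequal application volumes.
    # A naive "risk score" built on absolute historical counts ends up penalizing
    # the larger district even though the underlying behaviour is the same.
    import random

    random.seed(42)
    TRUE_FRAUD_RATE = 0.02                      # identical in both districts
    applications = {"district_A": 10, "district_B": 100}

    detected = {d: sum(random.random() < TRUE_FRAUD_RATE for _ in range(n))
                for d, n in applications.items()}

    total = sum(detected.values()) or 1         # avoid division by zero
    naive_risk_score = {d: count / total for d, count in detected.items()}

    print("detected cases:", detected)
    print("naive risk score:", naive_risk_score)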

This bias was first detected in algorithms used to predict and determine health insurance premiums in the United States. Subsequent verification has shown that the same distortions and arbitrary weightings occur in numerous systems adopted by public administrations. This highlights the profound difficulty in identifying the root cause of the error and reprogramming the system of weights using non-discriminatory predictive models.

The problem lies in the uncritical adoption of pre-set products, rather than in the careful construction of specific, contextually grounded weighting systems.

Conclusions

In this article, I have examined the Italian context concerning the introduction of artificial intelligence into the legal sphere, and in particular, into administrative proceedings. This topic intersects with the even more delicate issue of the relationship between AI and criminal justice (Di Salvo, 2024a). While several stakeholders highlight the undeniable advantages associated with machine learning, hyper-computational capacity, and advanced research systems, the legal domain remains an exceptionally sensitive environment in which to test such innovations. The need for careful decision-making, for individualized reasoning on a case-by-case basis, and, above all, for the human point of view is central to decisions that are fair, just, and tailored to the individual.

I have critically discussed the newly introduced Italian legislation on AI to provide an overall framework, and I have illustrated the initial plans for integrating AI into the administrative sector. In the third section, I focused on the connection between AI and judicial review of technical discretion, highlighting the potential procedural implications of an extensive deployment of AI in administrative activities characterized by such discretion. Finally, in the fourth section, I analyzed a case of discriminatory administrative practice resulting from the use of AI systems by a public authority.

Taken together, these reflections reveal a clear need for jurists, first and foremost, to intervene with the technical competence and critical approach inherent in their profession in order to identify the concrete risks that uncritical and pervasive use of AI systems may pose to citizens’ rights within the sensitive context of justice and litigation.

1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No. 300/2008, (EU) No. 167/2013, (EU) No. 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (text with EEA relevance), O.J. L, 2024/1689, 12.7.2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj

2. Disegno di legge “Norme per lo sviluppo e l’adozione di tecnologie di intelligenza artificiale” [Bill “on the development and adoption of AI technologies”], n. 1066AS, 23 April 2024, https://www.senato.it/service/PDF/PDFServer/BGT/01411729.pdf (It.).

3. Ibid.

4. Disegno di Legge “Disposizioni e deleghe al Governo in materia di intelligenza artificiale” [Bill “on the Provisions and powers delegated to the Government regarding artificial intelligence”] [D.d.l. Intelligenza artificiale], A. S. 1146, Legislatura XIX, Art. 5. https://www.senato.it/leggi-e-documenti/disegni-di-legge/scheda-ddl?did=59313 (It.).

5. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 1 (It.).

6. Costituzione Italiana [Constitution of the Italian Republic] (It.).

7. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 3 (It.).

8. In Italy, civil proceedings take, on average, seven years to complete.

9. The issue is linked to a broader one — too broad to be dealt with here: in order to verify the data, the algorithm would have to be made public, but if the algorithm is made public, private companies claim that this constitutes a violation of industrial property rights.

10. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 23.

11. Ibid., Art. 5.

12. See Cass. Civ., Sez. III [Court of Cassation], 2023, n. 8771; Cass. Civ., Sez. III, 2022, n. 9143; Cass. Civ., Sez. III, 2021, n. 25672; Cass. Civ., Sez. III, 2020, n. 14437; Cass. Civ., Sez. III, 2019, n. 17483; Cass. Civ., Sez. III, 2017, n. 1218; Cass. Pen., Sez. IV, 2016, n. 4631; Cass. Civ., Sez. III, 2015, n. 19669; Cass. Civ., Sez. III, 2014, n. 21911; Cass. Civ., Sez. III, 2013, n. 1353.

13. This was indicated to the author by Google in a personal communication upon inquiring.

14. In Italy, the relevant statute is the Legge “Protezione del diritto d’autore e di altri diritti connessi al suo esercizio” [Law on “Protection of copyright and other rights related to its exercise”], n. 633, 16 July 1941 (It.).

15. This is a requirement that must be specified in the writ of summons; if it is not indicated, the case will be dismissed.

16. Bridge Design is a legal-design model aimed at creating a transitional and functional link between an existing regulatory, contractual or factual situation and a new desired structure, overcoming a temporal, regulatory or resource “gap”. The objective is to ensure continuity, stability and progressiveness in the transition, avoiding abrupt interruptions or regulatory gaps.

17. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 23.

18. Short for “Intelligenza Artificiale” (Artificial Intelligence) in Italian.

19. Legge “Protezione del diritto d’autore e di altri diritti connessi al suo esercizio” [Law on “Protection of copyright and other rights related to its exercise”], n. 633, 16 July 1941 (It.)

20. The three requirements (skill, labour, and judgement) as the basis for assessing the originality of a work derive from a landmark ruling by the Supreme Court of Canada, which rejected the doctrine of “sweat of the brow” as a sufficient criterion for copyright protection (CCH Canadian Ltd. v. Law Society of Upper Canada, [2004] 1 S.C.R. 339).

21. Dipartimento per la Trasformazione Digitale. (2024, April 23). Conferenza stampa del Consiglio dei Ministri del 23 aprile 2024 [Video]. YouTube. https://www.youtube.com/watch?v=dnmckrKC-iM

22. Article 612-quater Codice Penale [Cod. Pen.] [Criminal Code] (It.).

23. Articles 61(11-novies), 494 and 501 Cod. Pen. [Criminal Code] (It.).

24. Amendment to Law No. 633 of 1941 (Copyright Law)

25. Costituzione Italiana [Constitution of the Italian Republic], Art. 27.

26. Article 612-quater Cod. Pen. [Criminal Code] (It.).

27. Article 422 Cod. Pen. [Criminal Code] (It.).

28. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 8.

29. The term “PNRR decree” refers to several decree-laws issued to implement the National Recovery and Resilience Plan (Piano nazionale di ripresa e resilienza), in particular those that introduced urgent measures in the areas of employment, education and universities. Among the most recent are Decreto Legge “Disposizioni urgenti in materia di lavoro, università, ricerca e istruzione per una migliore attuazione del Piano nazionale di ripresa e resilienza” [Decree-Law on “Urgent measures concerning employment, universities, research and education for better implementation of the National Recovery and Resilience Plan”], n. 160, 28 October 2024 (It.), and Decreto Legge “Ulteriori disposizioni urgenti in materia di attuazione delle misure del Piano nazionale di ripresa e resilienza e per l’avvio dell’anno scolastico 2025/2026” [Decree-Law on “Further urgent provisions regarding the implementation of measures under the National Recovery and Resilience Plan and the start of the 2025/2026 school year”], n. 45, 7 April 2025 (It.).

30. The “Garante della Privacy” or Privacy Authority is the Italian independent administrative authority responsible for supervising, enforcing and providing guidance on the application of personal data protection legislation. Its authority derives mainly from Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 OJ (L 119), which in Article 51 provides for the establishment of independent supervisory authorities in each Member State, and specifically from Decreto Legislativo [Legislative Decree], n. 101, 10 August 2018, the updated “Privacy Code”, which defines in detail its organisation, powers and procedures.

31. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX, Art. 6 (It.).

32. D.d.l. Intelligenza artificiale, A. S. 1146, Legislatura XIX.

33. Dipartimento per la Trasformazione Digitale. (2024, April 23). Conferenza stampa del Consiglio dei Ministri del 23 aprile 2024 [Video]. YouTube. https://www.youtube.com/watch?v=dnmckrKC-iM

34. See Di Giacomo, L. (2024, August 6). Intelligenza artificiale e il diritto: Un binomio in evoluzione. https://www.diritto.it/intelligenza-artificiale-diritto-binomio-evoluzione/ ; Di Giacomo, L. (2025, June 26). Disegno di legge n. 1146/2024 (Ddl Intelligenza Artificiale): Ok dalla Camera. https://www.diritto.it/disegno-legge-n-1146-2024-intelligenza-artificiale/ ; Donini, F. (2024, July 16). L’intelligenza artificiale ed il sistema di giustizia predittiva. https://www.diritto.it/intelligenza-artificiale-sistema-giustizia/ ; Magnolo, F. (2025, January 16). Intelligenza artificiale e copyright: Riflessioni sull’AI act. https://www.diritto.it/intelligenza-artificiale-copyright-riflessioni-ai/ ; Sorrentino, C. (2025, January 7). AI e Diritto alla spiegazione nell’uso degli algoritmi decisionali. https://www.diritto.it/intelligenza-artificiale-diritto-spiegazione-uso/

35. Segretariato Generale della Giustizia Amministrativa, Servizio per l’Informatica. (2024, October). Intelligenza artificiale e giustizia amministrativa: Strategie di impiego, metodologie e sicurezza. https://www.dirittobancario.it/wp-content/uploads/2024/10/Report-Segretariato-Generale-della-Giustizia-Amministrativa-ottobere-2024.pdf

36. Segretariato Generale della Giustizia Amministrativa, Servizio per l’Informatica. (2024, October). Intelligenza artificiale e giustizia amministrativa: Strategie di impiego, metodologie e sicurezza. https://www.dirittobancario.it/wp-content/uploads/2024/10/Report-Segretariato-Generale-della-Giustizia-Amministrativa-ottobere-2024.pdf

37. Ibid.

38. Costituzione Italiana [Constitution of the Italian Republic].

39. Di Salvo, M. (2024, April 10). Dalla chiarezza dell’atto al legal design. https://www.diritto.it/dalla-chiarezza-dell-atto-al-legal-design/

40. Legge “Nuove norme in materia di procedimento amministrativo e di diritto di accesso ai documenti amministrativi” [Law on “New regulations governing administrative proceedings and the right of access to administrative documents”], n. 241, 18 August 1990 (It.)

41. Cons. Stato [Council of State], Sez. VI, 13 December 2019, no. 84723 (It.).

42. Cons. Stato [Council of State], Sez. VI, 4 February 2020, no. 881 (It.).

43. Tribunale Amministrativo Regionale Lazio [Regional Administrative Court of Lazio], Sez. Quarta Ter, 27 July 2023, Order no. 4567 (It.).

44. This is the general orientation of the Council of State in Italy.

45. Costituzione Italiana [Constitution of the Italian Republic], Art. 23, 113.

46. Družstevní Záložna Pria & Others v. Czech Republic, App. No. 72034/01 (Eur. Ct. H.R. July 31, 2008), https://hudoc.echr.coe.int/eng?i=001-87882

47. Cons. Stato [Council of State], Sez. VI, 2019, n. 4990 (It.).

48. Booth, R. (2024, December 6). Revealed: Bias found in AI system used to detect UK benefits fraud. The Guardian. https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits

49. Johnson, K. (2023, January 21). Gli algoritmi che discriminano chi cerca casa negli Stati Uniti. Wired Italia. https://www.wired.it/article/intelligenza-artificiale-stati-uniti-discriminazione-casa/ ; Verga, E. (2023, October 10). Algoritmi discriminatori, da New York una legge che potrebbe influenzare le future applicazioni di intelligenza artificiale. https://tech4future.info/algoritmi-discriminatori/ ; Meo (2021); Falchi (2020).

* Ed. note: By decision of the authorities of the Russian Federation, Meta Platforms, Inc. has been declared an extremist organization, and its activities are prohibited on the territory of Russia.

References

1. Dahl, M., Magesh, V., Suzgun, M., & Ho, D. E. (2024). Large legal fictions: Profiling legal hallucinations in large language models. Journal of Legal Analysis, 16(1), 64–93. https://doi.org/10.1093/jla/laae003

2. Di Salvo, M. (2024a). Artificial Intelligence and the cyber utopianism of justice. Why AI is not intelligence and man’s struggle to survive himself. Russian Journal of Economics and Law, 18(1), 264–279. https://doi.org/10.21202/2782-2923.2024.1.264-279

3. Di Salvo, M. (2024b). Sull’abolizione del reato di abuso d’ufficio. Salvis Juribus. http://www.salvisjuribus.it/sullabolizione-del-reato-di-abuso-dufficio

4. Falchi, M. C. (2020). Intelligenza artificiale: Se l’algoritmo è discriminatorio. Ius in Itinere. https://iusinitinere.it/intelligenza-artificiale-se-lalgoritmo-e-discriminatorio/

5. Grimmelikhuijsen, S., & Meijer, A. (2022). Legitimacy of algorithmic decision-making: Six threats and the need for a calibrated institutional response. Perspectives on Public Management and Governance, 5(3), 232–242. https://doi.org/10.1093/ppmgov/gvac008

6. Meo, M. (2021). L’intelligenza artificiale discrimina, eccome. Ecco perché e come rimediare. Agenda Digitale. https://www.agendadigitale.eu/cultura-digitale/lintelligenza-artificiale-discrimina-eccome-ecco-perchee-come-rimediare/


About the Author

M. Di Salvo
National Agency for Artificial Intelligence Foundation
Italy

Michele Di Salvo — Doctor of Law (University of Naples Federico II), Research Coordinator

16, Via Giuseppe Revere, Milano, 20123




