
Digital Law Journal


ARTICLES

Abstract

The adoption of the EU Artificial Intelligence Act (AI Act) established mandatory life-cycle regulation of AI systems in the European Union while preserving the validity of the General Data Protection Regulation (GDPR). The training stage of AI models has consequently become a point of intersection between two regulatory regimes: while the AI Act emphasizes data quality and representativeness along with risk management and documentation of training processes, the GDPR sets out the applicable principles of lawfulness, data minimization, purpose limitation, and storage limitation, as well as providing data subjects with a set of safeguards and remedies. In practical terms, this interaction creates a risk of legally defective model training due to the pursuit of representativeness through excessive data collection and repeated re-use of personal data. This article examines the permissibility and organization of AI model training under the joint application of the AI Act and the GDPR. The research sets out to substantiate a legal model that enables proportionate technical and organizational safeguards while preserving training quality and ensuring the lawfulness of personal data processing that respects the fundamental rights of data subjects. The methodology combines doctrinal legal analysis of the AI Act requirements on risk management and data governance with a comparative assessment of the GDPR principles and procedural tools for ensuring lawful processing, together with a systematization of typical governance artefacts used in the development and deployment of high-risk AI systems. The results are presented as an integrated compliance-by-design model for actors involved in the training stage. A practical distinction between an “AI system” and an “AI model” is substantiated: whereas an AI system is qualified as an organizational and technical envelope comprising the model, infrastructure, input and output interfaces, monitoring, and human interaction, an AI model is treated as the algorithmic core trained on data and used to infer outputs. This distinction can be applied to allocate obligations between the provider and entities deploying or operating the system. The proposed mechanism for reconciling dataset representativeness and accuracy with the GDPR data minimization principle relies on a documented feature inventory, a necessity rationale for each class of data, and the exclusion of irrelevant attributes, alongside an assessment of indirect discrimination risks. Safeguards (pseudonymization, anonymization, aggregation, synthetic generation, and differential privacy) are matched to data sensitivity, use context, and the level of risk to fundamental rights on the basis of a proportionality model. This model is supported by the outcomes of a data protection impact assessment and a fundamental rights impact assessment. Finally, a practical legal governance loop for the training life cycle is formulated to cover the determination of the purpose and legal basis, limits on dataset re-use, access control and logging, as well as retention and deletion rules, along with procedures for revisiting training parameters and monitoring after deployment. The proposed model increases legal certainty and provides a reproducible framework for aligning the AI Act and GDPR during the training stage.

Abstract

As a result of the rapid development of information technology and the associated increase in cybercrime, the need to ensure international information security has become a critical challenge for the global community. This article analyzes the prospects of the UN Convention against Cybercrime, adopted on 24 December 2024, as a mechanism for coordinating the efforts of states in the field of international information security. The aim was to evaluate this international treaty as a basis for cooperation between states in the field of international information security and combating cybercrime. Particular attention is paid to the provisions of the Convention concerning the obligation of states to criminalize the list of acts established by the treaty, as well as to ensuring human rights compliance in connection with its application. In the course of the study, the author used historical-legal, formal-legal, and comparative-legal methods to assess certain aspects of international and regional regulation of cybercrime. The empirical basis of the article comprises normative legal acts and recommendatory instruments in the field of international information security, as well as legal doctrine devoted to the problems of this field. The findings can be summarized as follows: the adoption of the international treaty under analysis is an attempt to balance the opposing positions of the participants in the negotiation process regarding the need for universal regulation of cybersecurity against concerns about human rights violations. While the Convention sets out to resolve controversial issues, several significant shortcomings are identified, including a high probability of quickly becoming obsolete due to the complexity of the amendment procedure and the risk of permitting broad interpretation of certain provisions, which hinders the unification of practice. In particular, the Convention lacks an effective mechanism for monitoring the fulfilment of obligations. Despite these shortcomings and the associated legal risks, the Convention represents the first universal international treaty in the field of international information security and can be recommended for ratification and entry into force.

ESSAYS

Abstract

The expansion of digital technologies has reshaped the exercise of fundamental rights, prompting growing scholarly and regulatory attention to the notion of digital human rights. As digital platforms increasingly structure communication, access to information, and social participation, existing legal categories face conceptual and practical strain. While some accounts portray digital rights as a straightforward extension of classical human rights, others emphasize their transformative impact on constitutional principles, enforcement mechanisms, and the distribution of power between public authorities and private actors. This paper situates digital rights within contemporary academic debates and emerging regulatory frameworks in order to clarify their normative scope and conceptual boundaries. It advances the argument that digital rights cannot be adequately understood through purely legal or purely technological lenses. Instead, they emerge at the intersection of constitutional law, digital governance, and public policy, where regulatory instruments, institutional design, and educational strategies jointly shape the conditions for rights protection. The analysis highlights the constitutional paradox of digital platforms, which exercise functions traditionally associated with public authority while remaining only partially subject to democratic accountability and judicial oversight. Drawing on European constitutional principles, supranational regulation, and policy initiatives, the study demonstrates how current legal frameworks seek to respond to private digital power while revealing their structural limits in data-driven and algorithmic environments. At the same time, scholarship on Media and Information Literacy (MIL) is mobilized to show how citizens’ informational capacities function as a normative complement to legal safeguards, enabling individuals to exercise their rights meaningfully rather than merely formally. By integrating legal doctrine, public policy analysis, and MIL, this article contributes a coherent analytical framework for understanding digital rights as a hybrid normative construct. It concludes that the effective protection of digital rights depends not only on legal guarantees and regulatory enforcement, but also on policy choices that strengthen individual and collective capacities within the digital public sphere.

Abstract

Digital technologies and artificial intelligence, which are increasingly used to address general tasks across all areas of society, may also serve as technical means for enforcing the rights of children and separately living parents to maintain contact with one another. Delineating the internationally recognized conceptual model of “virtual parenting” as an additional means of communication between a child and a parent residing separately, the study substantiates the possibility of applying such a model to the construction of family relationships and corresponding regulation at the conflict stage within the Russian legal framework. Using a formal legal methodology, the author provides a comparative legal review of the application of contemporary foreign applications (software programs) designed to ensure a neutral digital environment for such communication. Given the need for “fine-tuning” interpersonal relationships within the family, one of the most complex social institutions, works from other disciplines—primarily psychology and sociology—were also analyzed. The need to develop and implement a state-run online platform—provisionally entitled “The Territory of Family Communication and Trust”—powered by artificial intelligence to enable virtual communication between a child and a parent is outlined. An examination of law enforcement and judicial practice demonstrates that, despite modern legal systems formally granting a separately residing parent the right to communicate with—and participate in the upbringing of—a child, in cases of resistance by the other parent, the practical realization of this right becomes difficult and, in some cases, impossible. It is argued that, in certain cases, access to such a platform should be granted to state authorities (for example, bailiffs), and that digital reports generated by the platform should be endowed with evidentiary legal force. It is concluded that, in the digital era, the capabilities of digital technologies and artificial intelligence should be leveraged to provide institutional support to the family and society through a more flexible mechanism for enforcing the right to family communication and preventing potential abuses by one of the parents.



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2686-9136 (Online)