Artificial Intelligence and Law

Selected e-articles

Abstract: This article addresses a critical gap in the current AI regulatory discourse by focusing on the environmental sustainability of AI and technology more broadly, a topic often overlooked both in environmental law and in technology regulation, such as the General Data Protection Regulation (GDPR) or the EU AI Act. Recognizing AI's significant impact on climate change and its substantial water consumption, especially in large generative models like ChatGPT, GPT-4, or Gemini, the article aims to integrate sustainability considerations into technology regulation, in three steps. First, while current EU environmental law does not directly address these issues, there is potential to reinterpret existing legislation, such as the GDPR, to support sustainability goals. Counterintuitively, the article argues that this also implies the need to balance individual rights, such as the right to erasure, with collective environmental interests. Second, based on an analysis of current law, and the proposed EU AI Act, the article suggests a suite of policy measures to align AI and technology regulation with environmental sustainability. They extend beyond mere transparency mechanisms, such as disclosing greenhouse gas footprints, to include a mix of strategies like co-regulation, sustainability by design, restrictions on training data, and consumption caps, potentially integrating AI and technology more broadly into the EU emissions trading regime. Third, this regulatory toolkit could serve as a blueprint for other technologies with high environmental impacts, such as blockchain and metaverse applications. The aim is to establish a comprehensive framework that addresses the dual fundamental societal transformations of digitization and climate change mitigation.

Abstract: The emergence of increasingly capable artificial intelligence (AI) systems has raised concerns about the potential extreme risks associated with them. The issue has drawn substantial attention in academic literature and compelled legislators of regulatory frameworks like the European Union AI Act (AIA) to readapt them to the new paradigm. This paper examines whether the European Parliament’s draft of the AIA constitutes an appropriate approach to address the risks derived from frontier models. In particular, we discuss whether the AIA reflects the policy needs diagnosed by recent literature and determine if the requirements falling on providers of foundation models are appropriate, sufficient, and durable. We find that the provisions are generally adequate, but insufficiently defined in some areas and lacking in others. Finally, the AIA is characterized as an evolving framework whose durability will depend on the institutions’ ability to adapt to future progress.

Abstract: There will also be a new 'AI Office' attached to the European Commission, after the act is approved by the European Parliament and the European Council (which comprises member states' heads of government). Lessons learnt from the regulation of existing technologies, from medicines to motor vehicles, include the need for maximum possible transparency, for example, in data and models. [...] those responsible for protecting people from harm need to be independent of those whose role it is to promote innovation. The EU, to its credit, has much experience of drawing on natural and social science, along with engineering and technology, business and civil society, in its law-making.

Abstract: EU nations' governments approved the legislation on 2 February, and the law now needs final sign-off from the European Parliament, one of the EU's three legislative branches; this is expected to happen in April. The law bans AI systems that carry 'unacceptable risk', for example those that use biometric data to infer sensitive characteristics, such as people's sexual orientation. Some don't think the laws go far enough, leaving "gaping" exemptions for military and national-security purposes, as well as loopholes for AI use in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that studies the effects of automation on society.

Abstract: Technological developments enable modern cars to drive autonomously. The EU has embraced this phenomenon in the hope that such technology can ameliorate mobility and environmental problems and has therefore engaged in tailoring technical solutions to driving automation in Europe. But driving automation, like other uses of AI, raises novel legal issues, including in criminal law – for instance when such vehicles malfunction and cause serious harm. By only pushing for a technological standard for self-driving cars, are EU lawmakers missing necessary regulatory aspects? In this article, we argue that criminal law ought to be reflected in EU strategy and offer a proposal to fill the current gap, suggesting an approach to allocate criminal liability when humans put AI systems in the driver’s seat.

Abstract: According to the European Commission, one of the main objectives of the regulatory framework that this EU institution is currently proposing in the field of Artificial Intelligence is to “increment trust in the use of artificial intelligence.” Therefore, this paper explores the issue of trust and AI. The questions that it attempts to answer are the following. Why is trust important? Why is trust important, in particular, in the domain of AI? How does the EU Commission intend to achieve the objective of incrementing trust in the use of AI? Will the proposed regulatory framework achieve its proclaimed end?

Abstract: The aim of this paper is to discuss the management aspect of artificial intelligence development policy by the national regulators of the 27 European Union (EU) member states. The solutions formulated by three of them—Germany, France (as pioneers), and Poland—are analyzed in depth. Methodology: The research methods used are content analysis and comparative analysis of selected source documents. In the course of the deliberations presented, key legal acts concerning the areas of AI and modern technologies are cited. Conclusions: The obtained results allow us to conclude that, of the 27 EU countries, only one has not yet prepared a strategic project on artificial intelligence (AI) development, while among the existing policies one can find significant differences in the approach to strategic management. Practical applications: The presented work formulates a starting point for further research and for directions of change in the management of AI development policy. The attention of subsequent researchers should focus on a detailed analysis of other documents published by EU member states, a comparison of the policies of other countries across the world, and even an attempt to examine the global dimension of AI strategies (...).

Abstract: This special issue of the European Labour Law Journal, edited by Jeremias Adams-Prassl, Halefom Abraha, Aislinn Kelly-Lyth, Sangh Rakshita and Michael ‘Six’ Silberman, explores the regulation of Algorithmic Management in the European Union and beyond. In our guest editorial, we set out the background to the project, introduce the reader to the key themes and highlights of the papers to follow, and acknowledge the support that the project has enjoyed.

Abstract: Advances in technology have transformed and expanded the ways in which policing is run. One new manifestation is the mass acquisition and processing of private facial images via automatic facial recognition by the police: what we conceptualise as AFR-based policing. However, there is still a lack of clarity on the manner and extent to which this largely unregulated technology is used by law enforcement agencies and on its impact on fundamental rights. Social understanding and involvement are still insufficient in the context of AFR technologies, which in turn affects social trust in, and the legitimacy and effectiveness of, intelligent governance. This article delineates the function creep of this new concept, identifying the individual and collective harms it engenders. A technological, contextual perspective on the function creep of AFR in policing evidences the comprehensive creep of training datasets and learning algorithms, which have bypassed an unaware public. We thus argue that individual harms to dignity, privacy and autonomy combine to constitute a form of cultural harm, impacting directly on individuals and society as a whole (...).

Abstract: This article critically examines the inception of the recent European Commission (EC) proposal for a regulation laying down harmonization rules for Artificial Intelligence (AI Act). By establishing a four-level taxonomy of AI-related risks (non-high, limited, high, unacceptable) and corresponding technical standards, this instrument aims at preventing the occurrence of risks caused, in particular, by so-called high-risk AI systems. Though by virtue of its purpose and design the AI Act follows a risk-based approach to regulation, it displays a specificity when compared to existing risk regulation in the European Union (EU), in such areas as environment and health. This specificity stems from the operative definition of risk the AI Act relies on: the risks covered in this proposal are not scientifically measurable threats of physical harm but threats of human rights violations, which are difficult to quantify. In light of this, this article raises the issue of the evidence, if any, the EC gathered in view of designing a proportionate (i.e., conforming to reality) regulatory framework on AI (...).

Abstract: An issue that is characteristic of AI is data processing on a massive scale (giga data, Big Data). This issue is also important because of the proposal to require manufacturers to equip AI systems with a means to record information about the operation of the technology, in particular the type and magnitude of the risk posed by the technology and any negative effects that logging may have on the rights of others. Data gathering must be carried out in accordance with the applicable laws, particularly data protection laws and trade secret protection laws. Therefore, it is necessary to determine the applicable law in line with existing conflict-of-law regulations.

Abstract: For 'high-risk' uses, which include software in law enforcement and education, the act requires detailed documentation, that all use of AI systems is automatically logged, and that the systems are tested for accuracy, security and fairness. The EU would require providers of foundation models to compile and publish a summary of copyright-protected material used in their training data, and to train their models to safeguard them from generating law-breaking content. [...] For recommendation and content-moderation AI algorithms in particular, the EU last year adopted the Digital Services Act, which aims to stem the flow of dangerous content online. In October 2022, the White House Office of Science and Technology Policy (OSTP) released a Blueprint for an AI Bill of Rights, a white paper describing five principles meant to guide the use of AI, as well as potential regulations.

Abstract: Artificial intelligence (AI) finds increasingly growing applications in the working environment. Its importance has been recognised by the European Parliament and the European Commission, as reflected in the legislation prepared at the European Union level. As the use of AI creates new risks hitherto unknown from an Occupational Health and Safety (OHS) perspective, the question is whether the proposed EU regulations address these risks. The starting point for further consideration should be an analysis of the proposed changes to EU law in the context of the general principles of labour law. In addition to proposals to amend EU law on artificial intelligence, this article examines current occupational safety and health legislation. Issues related to occupational safety and health monitoring of employers using artificial intelligence were also the subject of the study. The social sciences' perception of human labour is not insignificant in assessing the new relationship at the employer-AI-employee level. The proposed model for regulating AI by the EU legislator is insufficient. First and foremost, there is no clear indication of the employer's obligations towards employees concerning occupational health and safety (...).

Abstract: The emergence of AI is a topic still fresh and new for law scholars. The aim of the Regulation regarding AI is to present a unified and harmonized core of legislation, on which the EU Commission and Member States can build to tackle the growing issues this new sector raises for the economy, society and public administration. As will be seen in the present article, the EU legislator remains anchored in existing legislation, laying down strict rules in response to some countries in Asia having used facial, biometric and location-recognition AI to control their people, award behavioural points and keep "score" of the perfect citizen (...).

Abstract: The history of high-tech regulation is a path studded with incidents. Each adverse event allowed the gathering of more information on high technologies and their impacts on people, infrastructure, and other technologies, posing the bases for their regulation. With the increasing diffusion of artificial intelligence (AI) use, it is plausible that this connection between incidents and high-tech regulation will be confirmed for this technology as well. This study focuses on the role of AI incidents and an efficient strategy of incident data collection and analysis to improve our knowledge of the impact of AI technologies and regulate them better. To pursue this objective, the paper first analyses the evolution of high-tech regulation in the aftermath of incidents. Second, the paper focuses on the recent developments in AI regulation through soft and hard laws. Third, this study assesses the quality of the available AI incident databases and their capacity to provide information useful for opening and regulating the AI black box. This study acknowledges the importance of implementing a strategy for gathering and analysing AI incident data and approving flexible AI regulation that evolves with such a new technology and with the information that we will receive from adverse events, an approach that is also endorsed by the European Commission and its proposal to regulate and harmonise rules on AI.

Abstract: Personal autonomy is at the core of liberal societies, and its preservation has been a focus of European Union (EU) consumer and data protection law. Professionals increasingly use artificial intelligence in consumer markets to shape user preferences and influence their behaviours. This paper focuses on the long-term impact of artificial intelligence on consumer autonomy by studying three specific commercial practices: (1) dark patterns in user interfaces; (2) behavioural advertising; and (3) personalisation through recommender systems. It explores whether and to what extent EU regulation addresses the risks to consumer autonomy of using artificial intelligence in markets in the long term. It finds that new EU regulation does bring novelties to protect consumer autonomy in this context but fails to sufficiently consider the long-term consequences of autonomy capture by professionals. Finally, the paper makes several proposals to integrate the long-term risks affecting consumer autonomy in EU consumer and data protection regulation. It does so through an interdisciplinary approach, drawing from legal research and findings in the study of long-term thinking, philosophy and ethics and computer science.
