
Artificial Intelligence and Ethics

Selected e-articles

  • AI and Ethics; 2020-2024; Cham: Springer; began with Volume 1, issue 1 (February 2021)

Abstract: AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.

  • What is AI Ethics?; Lambrecht, Felix; Moreno, Marina; American Philosophical Quarterly; October 2024, Vol. 61 Issue 4, p. 387-401.

Abstract: Artificial intelligence (AI) is booming, and AI ethics is booming with it. Yet there is surprisingly little attention paid to what the discipline of AI ethics is and what it ought to be. This paper offers an ameliorative definition of AI ethics to fill this gap. We introduce and defend an original distinction between novel and applied research questions. A research question should count as AI ethics if and only if (i) it is novel or (ii) it is applied and has gained new importance through the development of AI. We argue that a lack of such a definition contributes to six disciplinary problems: ethics washing and lobbying, limited applicability, dilution of the field, conceptual bloating, costs of AI ethics, and an internal dispute. Based on our definition, we construct a methodological framework for AI ethics and show how it helps address these problems.

Abstract: Artificial intelligence (AI) is rapidly reshaping our world. As AI systems become increasingly autonomous and integrated into various sectors, fundamental ethical issues such as accountability, transparency, bias, and privacy are exacerbated or morph into new forms. This introduction provides an overview of the current ethical landscape of AI. It explores the pressing need to address biases in AI systems, protect individual privacy, ensure transparency and accountability, and manage the broader societal impacts of AI on labor markets, education, and social interactions (...).

Abstract: The article "Homo ex machina. Ethics of artificial intelligence and digital law in the face of the horizon of technological singularity" analyzes the relationship between technology, ethics, and legal science. The author, Fernando H. Llamo Alonso, professor of Philosophy of Law at the University of Seville, highlights the importance of integrating humanities and sciences in education. Topics such as technological singularity, digital justice, the legal ethics of artificial intelligence, and the relationship between humans and algorithms are addressed. The need to maintain a connection with humanistic knowledge and the capacity for critical analysis in law is emphasized.

Abstract: The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of Western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction.

Abstract: Under its proposed Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards involving abstract normative concepts such as transparency, fairness, and accountability. Applying such concepts inevitably requires answering hard normative questions. Considering this challenge, we argue that there are three possible pathways for future standardisation under the AIA. First, European standard-setting organisations (‘SSOs’) could answer hard normative questions themselves. This approach would raise concerns about its democratic legitimacy. Standardisation is a technical discourse and tends to exclude non-expert stakeholders and the public at large. Second, instead of passing their own normative judgments, SSOs could track the normative consensus they find available. By analysing the standard-setting history of one major SSO, we show that such consensus tracking has historically been its pathway of choice. If standardisation under the AIA took the same route, we demonstrate how this would lead to a false sense of safety, as the process is not infallible. Consensus tracking would furthermore push the need to solve unavoidable normative problems down the line. Instead of regulators, AI developers and/or users could define what, for example, fairness requires. By the institutional design of its AIA, the European Commission would have essentially kicked the ‘AI Ethics’ can down the road (...).

Abstract: The infrastructure of the Internet is based on algorithms that enable the use of search engines, social networks, and much more. Algorithms themselves may vary in functionality, but many of them have the potential to reinforce, accentuate, and systematize age-old prejudices, biases, and implicit assumptions of society. Awareness of algorithms thus becomes an issue of agency, public life, and democracy. Nonetheless, as research has shown, people lack algorithm awareness. Therefore, this paper aims to investigate the extent to which people are aware of unethical artificial intelligence and what actions they can take against it (mitigation measures). A survey addressing these factors yielded 291 valid responses. To examine the data and the relationships between the constructs in the model, partial least squares structural equation modeling (PLS-SEM) was applied using the SmartPLS 3 tool. The empirical results demonstrate that awareness of mitigation measures is influenced by the self-efficacy of the user, whereas trust in the algorithmic platform has no significant influence. In addition, the explainability of an algorithmic platform has a significant influence on the user's self-efficacy and should therefore be considered when setting up the platform. The mitigation measures most frequently mentioned by survey participants are laws and regulations, various types of algorithm audits, and education and training. This work thus provides new empirical insights for researchers and practitioners in the field of ethical artificial intelligence.
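
To make the method concrete, here is a minimal sketch of estimating one structural path from that model (self-efficacy influencing awareness of mitigation measures), using scikit-learn's PLSRegression as a simplified stand-in for the full PLS-SEM procedure the study ran in SmartPLS 3. The construct indicators and data below are synthetic placeholders, not the survey data.

```python
# Simplified stand-in for PLS-SEM: estimate how well one latent construct
# (self-efficacy) explains another (awareness of mitigation measures).
# Indicator items and data are synthetic; only the sample size matches the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 291  # same number of valid responses as the survey

# Three 7-point Likert indicator items per construct (invented for illustration).
self_efficacy = rng.integers(1, 8, size=(n, 3)).astype(float)
awareness = 0.6 * self_efficacy.mean(axis=1, keepdims=True) + rng.normal(0, 1, (n, 3))

pls = PLSRegression(n_components=1)
pls.fit(self_efficacy, awareness)

# R^2 of the endogenous construct: variance in awareness of mitigation
# measures explained by the self-efficacy indicators.
print("R^2:", pls.score(self_efficacy, awareness))
```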

Abstract: Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars, all of which should be met throughout the system's entire life cycle: the system should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers the previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements (...).

Abstract: Data permeates nearly all spheres of society, and journalism is no exception to this since data has become a cornerstone of reality construction and perception. This contribution sets out to historicize the datafication processes in digital journalism and the way in which European institutions of media (self-)regulation have dealt with ethical issues regarding the use of data in algorithmic journalism in three areas: accountability, transparency, and privacy. The article shows that the process of datafication in journalism cannot be observed and analyzed in isolation, given that there is a double reflexivity between data-driven societal transformation processes and what happens in journalism. However, almost all press councils in Europe have so far ignored data-driven phenomena like algorithms or news automation. As a consequence, if self-regulators do not regulate, other institutions will, with the risk of news organizations being forced to make decisions on the grounds of regulatory frameworks that are not primarily intended for journalism.

Abstract: Healthcare organizations have realized that Artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and wrong workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption (...).

Abstract: The utilization of artificial intelligence (AI) applications has experienced tremendous growth in recent years, bringing forth numerous benefits and conveniences. However, this expansion has also provoked ethical concerns, such as privacy breaches, algorithmic discrimination, security and reliability issues, transparency, and other unintended consequences. To determine whether a global consensus exists regarding the ethical principles that should govern AI applications and to contribute to the formation of future regulations, this paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
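
As an illustration of how such a meta-analysis can gauge consensus, the sketch below tallies how often each ethical principle appears across coded guideline documents. The documents and principle labels are invented examples, not the paper's corpus of 200 policies.

```python
# Frequency tally over coded guideline documents: a common way to measure
# whether a principle enjoys broad consensus. All entries below are invented.
from collections import Counter

coded_guidelines = [
    {"transparency", "fairness", "privacy"},
    {"transparency", "accountability", "safety"},
    {"fairness", "privacy", "accountability"},
]

counts = Counter(p for doc in coded_guidelines for p in doc)
total = len(coded_guidelines)
for principle, k in counts.most_common():
    # Share of documents endorsing each principle.
    print(f"{principle}: {k}/{total} documents ({k / total:.0%})")
```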

Abstract: We propose a self-regulation tool for AI design that integrates societal measures such as fairness, interpretability, and privacy. To do so, we create an interface that allows AI practitioners (data scientists) to visually choose the machine learning (ML) algorithm that best matches the ethical preferences of AI designers. Using a design science methodology, we test the artifact on data scientists and show that the interface is easy to use, improves understanding of the ethical issues of AI, generates debate, makes algorithms more ethical, and is operational for decision-making (...).
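
The core idea behind such an interface can be sketched as a weighted scoring of candidate ML algorithms against a designer's ethical preferences. The ratings and weights below are illustrative placeholders, not values from the paper's artifact.

```python
# Score candidate ML algorithms against weighted ethical preferences.
# Ratings in [0, 1] per criterion are hypothetical, chosen only to illustrate.
candidates = {
    "logistic_regression": {"fairness": 0.7, "interpretability": 0.9, "privacy": 0.6},
    "gradient_boosting":   {"fairness": 0.6, "interpretability": 0.4, "privacy": 0.6},
    "neural_network":      {"fairness": 0.5, "interpretability": 0.2, "privacy": 0.5},
}

# A designer's ethical preferences, expressed as weights summing to 1.
weights = {"fairness": 0.5, "interpretability": 0.3, "privacy": 0.2}

def ethical_score(ratings: dict[str, float]) -> float:
    """Weighted sum of an algorithm's ratings under the designer's preferences."""
    return sum(weights[c] * ratings[c] for c in weights)

best = max(candidates, key=lambda name: ethical_score(candidates[name]))
print(best, round(ethical_score(candidates[best]), 2))
```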

Abstract: The swiftness of artificial intelligence (AI) progress in plant science begets relevant ethical questions with significant scientific and societal implications. Embracing a principled approach to regulation, ethics review and monitoring, and human-centric interpretable informed AI (HIAI), we can begin to navigate our voyage towards ethical and socially responsible AI.

Abstract: The launch of OpenAI's GPT-3 model in June 2020 began a new era for conversational chatbots. While there are chatbots that do not use artificial intelligence (AI), conversational chatbots integrate AI language models that allow for back-and-forth conversation between an AI system and a human user. GPT-3, since upgraded to GPT-4, harnesses a natural language processing technique called sentence embedding and allows for conversations with users that are more nuanced and realistic than before. The launch of this model came in the first few months of the COVID-19 pandemic, when increases in health care needs globally, combined with social distancing measures, made virtual medicine more relevant than ever (...).
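
For readers unfamiliar with the technique, the sketch below shows the basic sentence-embedding idea: mapping sentences to vectors so that semantically similar utterances land close together. It uses the open-source sentence-transformers library and the all-MiniLM-L6-v2 model purely as an illustration; it is not OpenAI's proprietary GPT-3/GPT-4 pipeline.

```python
# Sentence embeddings: encode sentences as vectors and compare them with
# cosine similarity, so a chatbot can retrieve the most relevant context.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "I have a fever and a sore throat."
candidates = [
    "Symptoms of a common cold or flu.",
    "How to renew a driving licence.",
]

q_vec = model.encode(query, convert_to_tensor=True)
c_vecs = model.encode(candidates, convert_to_tensor=True)

# Higher cosine similarity = closer in meaning; the flu-related
# sentence should score higher than the licence one.
print(util.cos_sim(q_vec, c_vecs))
```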

Abstract: Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission. Set up a scientific body to audit AI systems: an official body is needed to evaluate the safety and validity of generative AI systems, including bias and ethical issues in their use (see 'An auditor for generative AI'). The auditing body should be run in the same way as an international research institution: it should be interdisciplinary, with five to ten research groups that host specialists in computer science, behavioural science, psychology, human rights, privacy, law, ethics, science of science, and philosophy.

Abstract: In this article, the authors reflect on the legal, ethical, and social aspects of the introduction of artificial intelligence in the field of medicine. The authors used the dialectical method to understand the problematic aspects of qualitative changes in the healthcare system of Ukraine in connection with the quantitative increase in the use of artificial intelligence technology. The systemic method contributed to determining the nature of the impact of the introduction of artificial intelligence on the transformation of the structural elements of legislation in the healthcare sector. Analytical and formal-logical methods were useful in identifying the legal, ethical, and social problems arising from the introduction of artificial intelligence and in formulating proposals for their solution. Emphasis was placed on the current state of the legal regulation of artificial intelligence in Ukraine and on the legal, ethical, and social problems that need to be addressed in the process of its implementation. The authors conclude that Ukraine is now at the initial stage of introducing artificial intelligence into public life. They identify the lack of legislative work to regulate public relations associated with the use of artificial intelligence as a key problem, and provide proposals that can help mitigate the risks of its introduction.

  • Deepfakes and Dishonesty; Flattery, Tobias; Miller, Christian B.; Philosophy & Technology; December 2024, Vol. 37 Issue 4, p. 120, Article 120

Abstract: Deepfakes raise various concerns: risks of political destabilization, depictions of persons without their consent that cause them harm, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. But under what conditions does the use of deepfakes fail to be honest? (...)

Abstract: This article poses a simple question: can AI lie? In response to this question, the article examines, as its point of inquiry, popular AI chatbots, such as ChatGPT. In doing so, an examination of the psychoanalytic, philosophical, and technological significance of AI and its complexities is located in relation to the dynamics of truth, falsity, and deception. That is, by critically considering the chatbot's ability to engage in natural language conversations and provide contextually relevant responses, it is argued that what separates the AI chatbot from anthropocentric debates, which allude to some form of conscious recognition on behalf of AI, is the importance of the lie – an importance which a psychoanalytic approach can reveal (...).

Abstract: The integration of large language models (LLMs) in medical education offers both opportunities and challenges. While these artificial intelligence (AI)-driven tools can enhance access to information and support critical thinking, they also pose risks such as potential overreliance and ethical concerns. To ensure ethical use, students and instructors must recognize the limitations of LLMs, maintain academic integrity, and handle data cautiously; instructors should also prioritize content quality over AI detection methods. LLMs can be used as supplementary aids rather than primary educational resources, with a focus on enhancing accessibility and equity and fostering a culture of feedback (...).

Abstract: The emergence of increasingly capable artificial intelligence (AI) systems has raised concerns about the potential extreme risks associated with them. The issue has drawn substantial attention in academic literature and compelled legislators of regulatory frameworks like the European Union AI Act (AIA) to readapt them to the new paradigm. This paper examines whether the European Parliament’s draft of the AIA constitutes an appropriate approach to address the risks derived from frontier models. In particular, we discuss whether the AIA reflects the policy needs diagnosed by recent literature and determine if the requirements falling on providers of foundation models are appropriate, sufficient, and durable. We find that the provisions are generally adequate, but insufficiently defined in some areas and lacking in others. Finally, the AIA is characterized as an evolving framework whose durability will depend on the institutions’ ability to adapt to future progress.
