"This book considers morality as a dynamic ecosystem that can change in response to its sociomaterial embedding. It particularly explores the role of technology in mediating the meaning of human values and studies the implications of this capacity for the use, design, and governance of technologies"--
Abstract This paper critically examines the political implications of Large Language Models (LLMs), focusing on the individual and collective ability to engage in political practices. The advent of AI-based chatbots powered by LLMs has sparked debates on their democratic implications. These debates typically focus on how LLMs spread misinformation and thus undermine the evaluative skills that people need for informed decision-making and deliberation. This paper suggests that, beyond the spread of misinformation, the political significance of LLMs extends to the core of political subjectivity and action. It explores how LLMs contribute to political de-skilling by influencing the capacities for critical engagement and collective action. Put differently, we explore how LLMs shape political subjectivity. We draw on Arendt's distinction between speech and language and Foucault's work on counter-conduct to articulate in what sense LLMs give rise to political de-skilling and hence pose a threat to political subjectivity. The paper concludes by considering how to account for the impact of LLMs on political agency without succumbing to technological determinism, and by pointing to how the practice of parrhesia enables one to form one's political subjectivity in relation to LLMs.
In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which explore the interplay between technologies and morality, we present an analysis of concerns related to the adoption of machine learning-aided medical diagnosis. We analyze anticipated moral issues that machine learning systems pose for different stakeholders, such as bias and opacity in the way that models are trained to produce diagnoses, changes to how health care providers, patients, and developers understand their roles and professions, and challenges to existing forms of medical legislation. Albeit preliminary in nature, the insights offered by the technomoral change and the technological mediation approaches expand and enrich the current discussion about machine learning in diagnostic practices, bringing distinct and currently underexplored areas of concern to the forefront. These insights can contribute to a more encompassing and better informed decision-making process when adapting machine learning techniques to medical diagnosis, while acknowledging the interests of multiple stakeholders and the active role that technologies play in generating, perpetuating, and modifying ethical concerns in health care.
Following the "control dilemma" of Collingridge, influencing technological developments is easy when their implications are not yet manifest, yet once we know these implications, they are difficult to change. This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, when technologies affect both society and the value frameworks we use to evaluate them. Early in its development, we do not know how a technology will affect the value frameworks from which it will be evaluated, while later, when the implications for society and morality are clearer, it is more difficult to guide the development in a desirable direction. Present-day approaches to this dilemma focus on methods to anticipate ethical impacts of a technology ("technomoral scenarios"), being too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), discarding anticipation of the future implications. We present the approach of technological mediation as an alternative that focuses on the dynamics of the interaction between technologies and human values. By investigating online discussions about Google Glass, we examine how people articulate new meanings of the value of privacy. This study of "morality in the making" allows developing a modest and empirically informed form of anticipation.
Abstract We propose a pragmatist account of value change that helps to understand how and why values sometimes change due to technological developments. Inspired by John Dewey's writings on value, we propose to understand values as evaluative devices that carry over from earlier experiences and that are to some extent shared in society. We discuss the various functions that values fulfil in moral inquiry and propose a conceptual framework that helps to understand value change as the interaction between three manifestations of value distinguished by Dewey, i.e., "immediate value," "values as the result of inquiry" and "generalized values." We show how this framework helps to distinguish three types of value change: value dynamism, value adaptation, and value emergence, and we illustrate these with examples from the domain of technology. We argue that our account helps to better understand how technology may induce value change, namely through the creation of what Dewey calls indeterminate situations, and we show how our account can integrate several insights on (techno)moral change offered by other authors.
Abstract This paper focuses on two examples of the introduction and use of COVID‐19 contact tracing apps in The Netherlands (CoronaMelder) and Belgium (Coronalert). It aims to offer a critical, sociotechnical perspective on tracing apps to understand how social, technical, and institutional dimensions form the ingredients for increasing surveillance. While it is still too early to gauge the implications of surveillance‐related initiatives in the fight against COVID‐19, the "technology theatre" put in place worldwide has already shown that very little can be done to prevent the deployment of technologies, even if their effectiveness is yet to be determined. The context‐specific perspective outlined here offers insights into the interests of many different actors involved in the technology theatre, for instance, the corporate interest in sociotechnical frameworks (both apps rely on the Google/Apple exposure notifications application programming interface). At the same time, our approach seeks to go beyond dystopian narratives that do not consider important sociocultural dimensions, such as choices made during app development and implementation to mitigate potential negative impacts on privacy.
Abstract Our democratic systems have been challenged by the proliferation of artificial intelligence (AI) and its pervasive usage in our society. For instance, by analyzing individuals' social media data, AI algorithms may develop detailed user profiles that capture individuals' specific interests and susceptibilities. These profiles are leveraged to derive personalized propaganda, with the aim of influencing individuals toward specific political opinions. To address this challenge, the value of privacy can serve as a bridge, as having a sense of privacy can create space for people to reflect on their own political stance prior to making critical decisions, such as voting in an election. In this paper, we explore a novel approach by harnessing the potential of AI to enhance the privacy of social media data. By leveraging adversarial machine learning, i.e., "AI versus AI," we aim to fool AI-based user profiling, thereby helping users hold a stake in resisting political profiling and preserving the deliberative nature of their political choices. More specifically, our approach probes the conceptual possibility of infusing people's social media data with minor alterations that can disturb user profiling, thereby reducing the efficacy of the personalized influences generated by political actors. Our study delineates the ethical and practical implications associated with this "AI versus AI" approach, highlighting the factors for the AI and ethics community to consider in facilitating deliberative decision-making toward democratic elections.
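To make the "AI versus AI" idea described in this abstract more concrete, the following is a minimal, self-contained sketch of an adversarial perturbation against a profiling model, not the authors' implementation: it assumes a hypothetical toy profiler (a logistic-regression classifier over a user's feature vector) and shows how a small, bounded alteration of the features can lower the profiler's predicted political-interest score.

# Minimal sketch of the "AI versus AI" idea: perturb a user's feature vector so
# that a profiling model becomes less confident about their political interests.
# The profiler, features, and numbers below are hypothetical stand-ins,
# not the model or data used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A stand-in "profiler": logistic regression over bag-of-words-style features.
n_features = 20
w = rng.normal(size=n_features)   # profiler weights (assumed known or approximated)
b = 0.1

def profile(x):
    """Probability that the profiler assigns the user to a target political segment."""
    return sigmoid(w @ x + b)

# A user's (toy) social-media feature vector, with values in [0, 1).
x = rng.random(n_features)
print(f"Original profiling score:  {profile(x):.3f}")

# FGSM-style perturbation: take a small step against the profiler's gradient.
# The gradient of the score w.r.t. x is p * (1 - p) * w for this profiler, so we
# nudge each feature slightly in the direction that lowers the predicted score.
epsilon = 0.05                     # perturbation budget ("minor alterations")
p = profile(x)
grad = p * (1 - p) * w             # d(score)/dx for the logistic profiler
x_adv = np.clip(x - epsilon * np.sign(grad), 0.0, 1.0)

print(f"Perturbed profiling score: {profile(x_adv):.3f}")
print(f"Mean absolute change per feature: {np.mean(np.abs(x_adv - x)):.3f}")

In this sketch the perturbation budget epsilon plays the role the abstract assigns to "minor alterations": the per-feature change stays small while the profiler's confidence drops, illustrating how adversarial machine learning could reduce the efficacy of automated political profiling.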