Interpretability and Transparency in Artificial Intelligence
In: Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics (online edn, Oxford Academic, 10 Nov. 2021), https://doi.org/10.1093/oxfordhb/9780198857815.013.20
In: Philosophy & Technology, Volume 32, Issue 1, pp. 17-21
ISSN: 2210-5441
In: Philosophy & Technology, Volume 30, Issue 4, pp. 475-494
ISSN: 2210-5441
In: Otto, P. and E. Gräf (eds) 3TH1CS - The Reinvention of Ethics in the Digital Age. 2017. iRights.Media, Berlin.
In: Ethics and Information Technology, 2017
Do we have a right to transparency when we use content personalization systems? Building on prior work in discrimination detection in data mining, I propose algorithm auditing as a compatible ethical duty for providers of content personalization systems to maintain the transparency of political discourse. I explore barriers to auditing that reveal the practical limitations on the ethical duties of service providers. Content personalization systems can function opaquely and resist auditing. However, the belief that highly complex algorithms, such as bots using machine learning, are incomprehensible to human users should not be an excuse to surrender high quality political discourse. Auditing is recommended as a way to map and redress algorithmic political exclusion in practice. However, the opacity of algorithmic decision making poses a significant challenge to the implementation of auditing.
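The abstract does not spell out what such an audit would involve in practice. The sketch below is purely illustrative: it probes a stand-in for an opaque personalization system (the recommend function here is hypothetical, not any real API) with matched user profiles that differ only in political leaning, and compares the topic exposure each profile receives.

```python
# Hypothetical sketch of an external algorithm audit: probe an opaque
# personalization system with matched profiles and compare what they are shown.
import random
from collections import Counter

def recommend(profile, n=20):
    """Stand-in for the opaque content personalization system under audit."""
    pool = ["politics_left", "politics_right", "sports", "business", "culture"]
    # Toy behaviour: items matching the profile's leaning are favoured.
    weights = [3 if profile["leaning"] in item else 1 for item in pool]
    return random.choices(pool, weights=weights, k=n)

def audit_exposure(leanings, trials=200):
    """Repeatedly probe the system and tally topic exposure per leaning."""
    exposure = {leaning: Counter() for leaning in leanings}
    for leaning in leanings:
        for _ in range(trials):
            exposure[leaning].update(recommend({"leaning": leaning}))
    return exposure

if __name__ == "__main__":
    for leaning, counts in audit_exposure(["left", "right"]).items():
        total = sum(counts.values())
        print(leaning, {topic: round(c / total, 3) for topic, c in counts.items()})
```

A real audit would direct such probes at the live system and apply the discrimination-detection measures the article draws on to decide whether an observed disparity amounts to political exclusion.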
In: https://doi.org/10.7916/d8-g10s-ka92
Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people's privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ). This Article shows that individuals are granted little control or oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively "economy class" personal data in the General Data Protection Regulation (GDPR). Data subjects' rights to know about (Articles 13–15), rectify (Article 16), delete (Article 17), object to (Article 21), or port (Article 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Article 9) or remedies to challenge inferences or important decisions based on them (Article 22(3)). This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe's new Copyright Directive and Trade Secrets Directive also fail to close the GDPR's accountability gaps concerning inferences. This Article argues that a new data protection right, the "right to reasonable inferences," is needed to help close the accountability gap currently posed by "high risk inferences," meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.
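Purely as an illustration of the proposed ex-ante disclosure (the record structure, field names, and reliability threshold below are assumptions, not part of the article), the three justification elements could be captured in a simple structure that a data controller completes before an inference is deployed.

```python
# Hypothetical sketch of an ex-ante justification record for a "high risk" inference.
from dataclasses import dataclass

@dataclass
class InferenceJustification:
    inferred_attribute: str     # e.g. an inferred credit or health risk score
    source_data: list[str]      # data types from which the inference is drawn
    basis_acceptable: str       # (1) why these data are an acceptable basis for the inference
    purpose_acceptable: str     # (2) why the inference is relevant and acceptable for the purpose
    validation_accuracy: float  # (3) measured accuracy / statistical reliability of the method
    reliability_threshold: float = 0.8  # assumed cut-off, not from the article

    def is_documented_and_reliable(self) -> bool:
        """Crude check: all three elements are present and the method meets the threshold."""
        complete = all([self.source_data, self.basis_acceptable, self.purpose_acceptable])
        return complete and self.validation_accuracy >= self.reliability_threshold
```

The ex-post mechanism described in the abstract would then let data subjects challenge an inference against the same recorded justification.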
In: Columbia Business Law Review, 2019(2)
In: Law, Governance and Technology Series, Volume 29
Contents:
Contributors
Introduction
  1 Background
  2 Big Data
  3 Biomedical Big Data
  4 Structure of the Volume
    4.1 Part I: Balancing Individual and Collective Interests
    4.2 Part II: Privacy and Data Protection
    4.3 Part III: Consent
    4.4 Part IV: Ethical Governance
    4.5 Part V: Professionalism and Ethical Duties
    4.6 Part VI: Foresight
  References
Part I: Balancing Individual and Collective Interests
"Strictly Biomedical? Sketching the Ethics of the Big Data Ecosystem in Biomedicine"
  1 The Chiaroscuro Portrait of Big Data
  2 Typical Big Biomedical Data
  3 Non-biomedical Big Data of Great Biomedical Value
    3.1 Loyalty Cards Points
    3.2 Social Media
    3.3 Mobile Devices
  4 The Digital Phenotype
  5 Towards a New Ethical Framework
    5.1 Vision for a New Framework
    5.2 Design Requirements
    5.3 Substantive Key Elements
      5.3.1 (1) Ethical Use and Privacy
      5.3.2 (2) Data Governance
      5.3.3 (3) Transparency and Accountability
  6 Conclusion
  References
Using Transactional Big Data for Epidemiological Surveillance: Google Flu Trends and Ethical Implications of 'Infodemiology'
  1 Introduction
  2 A Pragmatist Approach to Ethics
  3 Historical Overview
    3.1 Infodemiology: Covering 'Supply' and 'Demand'
    3.2 Analysing Health Information Demand
  4 Case Study: Google Flu Trends
    4.1 Normative Assumptions, Justifications and Values
      4.1.1 Epidemics of Fear
      4.1.2 The 'Innocent User' as Ideal Data Source
      4.1.3 Privacy
    4.2 Discourse Ethics
      4.2.1 Institutional Context
      4.2.2 Stakeholder Analysis
  5 Conclusion
  References
Denmark at a Crossroad? Intensified Data Sourcing in a Research Radical Country
  1 Introduction
  2 From Data Mining to Intensified Data Sourcing
  3 Denmark: A Country at a Crossroad
    3.1 The Register Infrastructure
In: Gillis, R., Laux, J. and Mittelstadt, B. 2024. Trust and Trustworthiness in Artificial Intelligence. In: Paul, R., Carmel, E. and J. Cobbe (eds.), Handbook on Artificial Intelligence and Public Policy, Cheltenham Spa: Edward Elgar
In: Regulation & Governance, Volume 18, Issue 1, pp. 3-32
ISSN: 1748-5991
In its AI Act, the European Union chose to understand the trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of "trustworthiness" with "acceptability" in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy, whose prospects of successfully engineering citizens' trust are uncertain. There remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.
In: Michigan Technology Law Review (2023)
In: Common Market Law Review, Volume 58, Issue 3, pp. 719-750
ISSN: 0165-0750
Online behavioural advertising (OBA) relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers' preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The technology's risks are exacerbated by the market power of ad intermediaries. This article shows how the Unfair Commercial Practices Directive (UCPD) can protect consumers from behavioural exploitation by incorporating market power analysis. Drawing on current research in economic theory, it argues for applying a stricter average consumer test if the market for ad intermediaries is highly concentrated. This stricter test should neutralize negative effects of behavioural targeting on consumer welfare. The article shows how OBA can amount to a misleading action and/or a misleading omission under Articles 6 and 7 UCPD, as well as an aggressive practice under Article 8 UCPD. It further considers how the recent legislative proposals by the European Commission to enact a Digital Markets Act (DMA) and a Digital Services Act (DSA) may interact with the UCPD and the suggested stricter average consumer test.
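The article ties the stricter average consumer test to high concentration in the market for ad intermediaries but does not prescribe how concentration should be measured. The sketch below assumes the Herfindahl-Hirschman Index and a hypothetical threshold purely to illustrate how such a trigger could be operationalized.

```python
# Illustrative only: an assumed concentration measure (HHI) triggering the
# stricter average consumer test; neither the index nor the threshold is
# prescribed by the article.

def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index from market shares expressed in percent."""
    return sum(share ** 2 for share in market_shares_percent)

def apply_stricter_test(market_shares_percent, threshold=2500):
    """Assume the stricter test applies once the market counts as highly concentrated."""
    return hhi(market_shares_percent) >= threshold

if __name__ == "__main__":
    ad_intermediary_shares = [55, 25, 10, 5, 5]         # hypothetical shares, in percent
    print(hhi(ad_intermediary_shares))                  # 3800
    print(apply_stricter_test(ad_intermediary_shares))  # True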