Algorithmic Fairness
281 results
In: Annual Review of Financial Economics, Volume 15, pp. 565-593
SSRN
In: Virginia Public Law and Legal Theory Research Paper No. 2019-39
SSRN
In: 48 Florida State Law Review 509 (2021)
SSRN
In: Philosophy & technology, Volume 33, Issue 2, pp. 225-244
ISSN: 2210-5441
In: Philosophy & technology, Volume 38, Issue 1
ISSN: 2210-5441
Abstract: The "impossibility results" in algorithmic fairness suggest that a predictive model cannot fully meet two common fairness criteria – sufficiency and separation – except under extraordinary circumstances. These findings have sparked a discussion on fairness in algorithms, prompting debates over whether predictive models can avoid unfair discrimination based on protected attributes, such as ethnicity or gender. As shown by Otto Sahlgren, however, the discussion of the impossibility results would gain from importing some of the tools developed in the philosophical literature on feasibility. Utilizing these tools, Sahlgren sketches a cautiously optimistic view of how algorithmic fairness can be made feasible in restricted local decision-making. While we think it is a welcome move to inject the literature on feasibility into the debate on algorithmic fairness, Sahlgren says very little about the general gains of bringing feasibility considerations into theorizing about algorithmic fairness. How, more precisely, does it help us make assessments about fairness in algorithmic decision-making? This is the question addressed in this Reply. More specifically, our two-fold argument is that feasibility plays an important but limited role in algorithmic fairness. We end by offering a sketch of a framework that may be useful for theorizing feasibility in algorithmic fairness.
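To fix ideas, the conflict between the two criteria can be illustrated with a small synthetic Python sketch (not drawn from any of the papers listed here; the groups, base rates, and noise level are arbitrary assumptions): a predictor with identical error behaviour for two groups satisfies separation, but unequal base rates then force its positive predictive value, and hence sufficiency, to diverge.

```python
import numpy as np

def group_metrics(y_true, y_pred):
    """Base rate, PPV (sufficiency side), and TPR/FPR (separation side) for one group."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "base_rate": (tp + fn) / y_true.size,
        "ppv": tp / (tp + fp),  # sufficiency: equal PPV across groups
        "tpr": tp / (tp + fn),  # separation: equal TPR ...
        "fpr": fp / (fp + tn),  # ... and equal FPR across groups
    }

rng = np.random.default_rng(0)
y_a = rng.binomial(1, 0.5, 10_000)  # group A, base rate 0.5
y_b = rng.binomial(1, 0.2, 10_000)  # group B, base rate 0.2

def predict(y):
    # The same noisy score process for both groups -> roughly equal TPR and FPR.
    return (y + rng.normal(0.0, 0.6, y.size) > 0.5).astype(int)

print("group A:", group_metrics(y_a, predict(y_a)))
print("group B:", group_metrics(y_b, predict(y_b)))
# Since PPV = TPR*p / (TPR*p + FPR*(1-p)) for base rate p, equal TPR/FPR with
# unequal base rates forces unequal PPV: sufficiency and separation conflict.
```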
In: Columbia Business School Research Paper
SSRN
Working paper
In: Philosophy and public affairs, Volume 51, Issue 2, pp. 166-190
ISSN: 1088-4963
In: Philosophy & technology, Volume 36, Issue 1
ISSN: 2210-5441
In: AI and ethics
ISSN: 2730-5961
Abstract: The increasing use of algorithms in predictive policing has raised concerns regarding the potential amplification of societal biases. This study adopts a two-phase approach, encompassing a systematic review and the mitigation of age-related biases in predictive policing. Our systematic review identifies a variety of fairness strategies in existing literature, such as domain knowledge, likelihood function penalties, counterfactual reasoning, and demographic segmentation, with a primary focus on racial biases. However, this review also highlights significant gaps in addressing biases related to other protected attributes, including age, gender, and socio-economic status. Additionally, it is observed that police actions are a major contributor to model discrimination in predictive policing. To address these gaps, our empirical study focuses on mitigating age-related biases within the Chicago Police Department's Strategic Subject List (SSL) dataset used in predicting the risk of being involved in a shooting incident, either as a victim or an offender. We introduce Conditional Score Recalibration (CSR), a novel bias mitigation technique, alongside the established Class Balancing method. CSR involves reassessing and adjusting risk scores for individuals initially assigned moderately high-risk scores, categorizing them as low risk if they meet three criteria: no prior arrests for violent offenses, no previous arrests for narcotic offenses, and no involvement in shooting incidents. Our fairness assessment, utilizing metrics like Equality of Opportunity Difference, Average Odds Difference, and Demographic Parity, demonstrates that this approach significantly improves model fairness without sacrificing accuracy.
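As a rough sketch of the CSR rule as described in the abstract, the following hypothetical Python snippet applies the three criteria; the column names, score scale, and thresholds are illustrative assumptions, not the SSL dataset's actual schema.

```python
import pandas as pd

def conditional_score_recalibration(df: pd.DataFrame,
                                    moderate_low: float = 250,
                                    moderate_high: float = 400,
                                    low_score: float = 100) -> pd.DataFrame:
    """Reassign moderately high risk scores to a low score when an individual
    has no violent-offense arrests, no narcotics arrests, and no shooting
    involvement (the three CSR criteria named in the abstract).
    All thresholds and column names here are illustrative assumptions."""
    eligible = (
        df["risk_score"].between(moderate_low, moderate_high)
        & (df["violent_arrests"] == 0)
        & (df["narcotic_arrests"] == 0)
        & (df["shooting_incidents"] == 0)
    )
    out = df.copy()
    out.loc[eligible, "risk_score"] = low_score
    return out
```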
In: Oxford review of economic policy, Volume 37, Issue 3, pp. 585-617
ISSN: 1460-2121
Abstract: The use of machine learning as an input into decision-making is on the rise, owing to its ability to uncover hidden patterns in large data and improve prediction accuracy. Questions have been raised, however, about the potential distributional impacts of these technologies, with one concern being that they may perpetuate or even amplify human biases from the past. Exploiting detailed credit file data for 800,000 UK borrowers, we simulate a switch from a traditional (logit) credit scoring model to ensemble machine-learning methods. We confirm that machine-learning models are more accurate overall. We also find that they do as well as the simpler traditional model on relevant fairness criteria, where these criteria pertain to overall accuracy and error rates for population subgroups defined along protected or sensitive lines (gender, race, health status, and deprivation). We do observe some differences in the way credit-scoring models perform for different subgroups, but these manifest under a traditional modelling approach and switching to machine learning neither exacerbates nor eliminates these issues. The paper discusses some of the mechanical and data factors that may contribute to statistical fairness issues in the context of credit scoring.
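The subgroup error-rate comparison described above can be approximated in a few lines; this sketch uses synthetic stand-in data and scikit-learn models (a logit baseline versus a gradient-boosting ensemble), none of which reflects the paper's actual credit-file setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=(n, 5))                    # stand-in credit features
group = rng.integers(0, 2, n)                  # stand-in protected attribute
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)  # default flag

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, group, random_state=0)

def subgroup_error_rates(model, X_te, y_te, g_te):
    """Compare false positive / false negative rates across protected subgroups."""
    pred = model.predict(X_te)
    for g in (0, 1):
        m = g_te == g
        fpr = np.mean(pred[m][y_te[m] == 0] == 1)
        fnr = np.mean(pred[m][y_te[m] == 1] == 0)
        print(f"  group {g}: FPR={fpr:.3f}, FNR={fnr:.3f}")

for model in (LogisticRegression(), GradientBoostingClassifier()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__)
    subgroup_error_rates(model, X_te, y_te, g_te)
```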
In: Philosophy & technology, Volume 37, Issue 4
ISSN: 2210-5441
Abstract: The now well-known impossibility results of algorithmic fairness demonstrate that an error-prone predictive model cannot simultaneously satisfy two plausible conditions for group fairness apart from exceptional circumstances where groups exhibit equal base rates. The results sparked, and continue to shape, lively debates surrounding algorithmic fairness conditions and the very possibility of building fair predictive models. This article, first, highlights three underlying points of disagreement in these debates, which have led to diverging assessments of the feasibility of fairness in prediction-based decision-making. Second, the article explores whether and in what sense fairness as defined by the conjunction of the implicated fairness conditions is (un)attainable. Drawing on philosophical literature on the concept of feasibility and the role of feasibility in normative theory, I outline a cautiously optimistic argument for the diachronic feasibility of fairness. In line with recent works on the topic, I argue that fairness can be made possible through collective efforts to eliminate inequalities that feed into local decision-making procedures.
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Volume 201, Issue 6
ISSN: 1573-0964
Abstract: This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago's PPA. We then explain their causes with Broadbent's contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be negotiated through democratic processes. With the above analysis, we next predict why some recommendations given in the bias reduction literature are not as effective as expected. Unlike the cliché highlighting equal participation for all stakeholders in predictive policing, we emphasize power structures to avoid hermeneutical lacunae. Finally, we aim to control PPA discrimination by proposing a governance solution: a framework of a social safety net.
In: AI & society: the journal of human-centred systems and machine intelligence
ISSN: 1435-5655
In: Philosophy & technology, Volume 34, Issue 4, pp. 1803-1817
ISSN: 2210-5441
Abstract: Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article aims to identify some complications with this approach: under which circumstances can Rawls's original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls's original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that the application of Rawls's original position to algorithmic fairness faces a boundary problem in defining relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought-experiment of the Rawlsian original position can be useful in algorithmic fairness decisions.
In: Philosophy and public affairs, Volume 50, Issue 2, pp. 239-266
ISSN: 1088-4963