Asemic Defamation, or, the Death of the AI Speaker
In: First Amendment Law Review, Volume 22
In: 21 Duke Law & Technology Review, 116 (2023)
Predictive algorithms are increasingly deployed in a variety of settings to determine legal status. Algorithmic predictions have been used to determine the provision of health care and social services, to allocate state resources, and to anticipate criminal behavior or activity. Further applications have been proposed to determine civil and criminal liability or to "personalize" legal default rules. Deployment of such artificial intelligence systems has properly raised questions of algorithmic bias, fairness, transparency, and due process. But little attention has been paid to the known sociological costs of using predictive algorithms to determine legal status. A large and growing social science literature documents the effects of "algorithmic living," showing how humans interact with machine-generated assessments. Many of these interactions are socially detrimental, and their corrosive effects are greatly amplified by the increasing speed and ubiquity of digitally automated algorithmic systems. In this paper I link the sociological and legal analysis of AI, highlighting the reflexive social processes that algorithmic metrics engage. The paper examines these overlooked social effects of predictive legal algorithms and contributes a vital but missing critique of such analytics to the literature. First, it shows how the problematic social effects of algorithmic legal metrics extend far beyond the concerns about accuracy that have thus far dominated critiques of such metrics. Second, it demonstrates that corrective governance mechanisms such as enhanced due process or transparency will be inadequate to remedy these corrosive effects, and that some such remedies, transparency in particular, may actually exacerbate the worst effects of algorithmic governmentality.
Third, it shows that applying algorithmic metrics to legal decisions aggravates the latent tension between equity and autonomy in liberal institutions, undermining democratic values in a manner and on a scale not previously experienced by human societies. Illuminating these effects casts new light on the inherent social costs of AI metrics, particularly the perverse effects of deploying algorithms in legal systems.
In: 106 Minnesota Law Review Headnotes 270 (2022)
In: 105 Minnesota Law Review Headnotes 301 (2021)
In: Duke Journal of Constitutional Law & Public Policy, Forthcoming
In: IIC - International Review of Intellectual Property and Competition Law, Volume 45, Issue 8, p. 865-867
ISSN: 2195-0237
In: IIC - International Review of Intellectual Property and Competition Law, Volume 44, Issue 7, p. 747-749
ISSN: 2195-0237
In: Regulation, p. 20, Winter 2012-2013
In: Regulation, Volume 35, Issue 4
In: Philosophy & Technology, Volume 24, Issue 4, p. 437-454
ISSN: 2210-5441