When Too Good is Too Much: Social Incentives and Job Selection
In: IZA Discussion Paper No. 12905
In: Experimental Economics (forthcoming)
In: IZA Discussion Paper No. 13201
Working paper
In: IZA Discussion Paper No. 7625
In: The Economic Journal: The Journal of the Royal Economic Society, Volume 134, Issue 658, pp. 766-784
ISSN: 1468-0297
Abstract
Artificial intelligence is increasingly becoming an indispensable advisor. New ethical concerns arise if artificial intelligence persuades people to behave dishonestly. In an experiment, we study how artificial intelligence advice (generated by a natural language processing algorithm) affects (dis)honesty, compare it with equivalent human advice, and test whether transparency about the advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This holds for both artificial intelligence and human advice. Algorithmic transparency, a commonly proposed policy to mitigate artificial intelligence risks, does not affect behaviour. The findings mark the first steps towards managing artificial intelligence advice responsibly.
We analyze how the team formation process influences the ability composition and performance of teams, showing in two natural field experiments how self-selection and random assignment affect team performance on different tasks. We identify the collaboration intensity of the task as the key driver of the effect of self-selection on team performance. When the task requires little collaborative effort, self-selected teams perform significantly worse than randomly assigned teams; when the task involves more collaborative effort, self-selected teams tend to outperform randomly assigned teams. We also observe assortative matching in self-selected teams, with subjects more likely to match with others of similar ability and the same gender.