Does offline political segregation affect the filter bubble? An empirical analysis of information diversity for Dutch and Turkish Twitter users
In: Computers in human behavior, Band 41, S. 405-415
ISSN: 0747-5632
Students and researchers in media and communication sciences study the role of media in our society. They frequently search through media archives to manually select items that cover a certain event. When this is done across large time spans and multiple media outlets, however, the task becomes challenging and laborious. As a result, researchers have so far focused on manual, qualitative analyses of newspaper coverage. PoliMedia aims to stimulate and facilitate large-scale, cross-media analysis of the coverage of political events. We focus on the meetings of the Dutch parliament and provide automatically generated links between the transcripts of those meetings, newspaper articles (including their original layout on the page), and radio bulletins. Via the portal at www.polimedia.nl, researchers can search through the debates and find related coverage in two media outlets, enabling a more efficient search process and qualitative analyses of the media coverage. Furthermore, the generated links are available via a SPARQL endpoint at data.polimedia.nl, allowing quantitative analyses with complex, structured queries that are not covered by the portal's search functionality, thus encouraging students to cross academic borders into fields that have previously been neglected.
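As a minimal sketch of how such a SPARQL endpoint could be queried programmatically, the snippet below builds a SELECT query and POSTs it using only the Python standard library. The predicate IRI is a placeholder assumption, not the endpoint's actual vocabulary, and the endpoint URL path is likewise assumed; both would need to be checked against the PoliMedia documentation.

```python
import urllib.parse
import urllib.request

# Assumed endpoint URL; the text only names the host data.polimedia.nl.
ENDPOINT = "http://data.polimedia.nl/sparql"


def build_links_query(limit: int = 10) -> str:
    """Assemble a SPARQL SELECT query for debate-to-coverage links.

    The predicate IRI below is purely illustrative: the real link
    vocabulary must be looked up in the PoliMedia dataset itself.
    """
    return (
        "SELECT ?debate ?article WHERE { "
        "?debate <http://example.org/linksToCoverage> ?article . "
        "} LIMIT %d" % limit
    )


def query_endpoint(query: str) -> bytes:
    """POST the query to the endpoint, requesting JSON results (needs network access)."""
    data = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    ).encode()
    request = urllib.request.Request(ENDPOINT, data=data)
    with urllib.request.urlopen(request) as response:
        return response.read()
```

A researcher would call `build_links_query()` to inspect or adapt the query text, then pass it to `query_endpoint()` to retrieve results for quantitative analysis.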
In: AI and ethics, Band 3, Heft 1, S. 241-255
ISSN: 2730-5961
Abstract: How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits, but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will help practically minded professionals take concrete steps toward designing and engineering AI systems that facilitate meaningful human control.