Search results
5,117,832 results
SSRN
Understanding Accountability in Algorithmic Supply Chains
In: 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23)
SSRN
AI as a Legal Person?
In: Artificial Intelligence & Intellectual Property, edited by Reto Hilty and Kung-Chung Liu, Oxford University Press (forthcoming 2020)
SSRN
Working paper
Conscious Empathic AI in Service
In: Journal of Service Research, Vol. 25, Issue 4, pp. 549-564
ISSN: 1552-7379
Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to advance to the next level, it needs to develop capabilities such as metathinking, creativity, and empathy. We contend that such a paradigm shift is possible through a fundamental change in the state of artificial intelligence toward consciousness, similar to what took place for humans through the process of natural selection and evolution. To that end, we propose that consciousness in AI is an emergent phenomenon that primordially appears when two machines cocreate their own language through which they can recall and communicate their internal state of time-varying symbol manipulation. Because, in our view, consciousness arises from the communication of inner states, it leads to empathy. We then provide a link between the empathic quality of machines and better service outcomes associated with empathic human agents that can also lead to accountability in AI services.
AI Service and Emotion
In: Journal of Service Research, Vol. 25, Issue 4, pp. 499-504
ISSN: 1552-7379
AI in service can be for routine mechanical tasks, analytical thinking tasks, or empathetic feeling tasks. We provide a conceptual framework for the customer, firm, and interactional use of AI for empathetic tasks at the micro-, meso-, and macro-levels. Emotions resulting from AI service interactions can include basic emotions (e.g., joy, sadness, and fear), self-conscious emotions (e.g., pride, guilt, embarrassment), and moral emotions (e.g., contempt, righteous anger, social disgust). These emotions are most likely to occur during frontline interactions in which both firms and customers use AI, a phenomenon called "AI as customer." The analysis of AI service and emotion can be at the macro-level, in which AI is transforming the service economy into a feeling economy; at the meso-level, in which firms can use "thoughtful AI" to make employees' and customers' lives a little bit better by brightening their days; and at the micro-level, in which customers can experience basic, self-conscious, and moral emotions from interactions with service AI.
AI innovation in services marketing
In: Advances in marketing, customer relationship management, and e-services (AMCRMES) book series
In: Premier reference source
This book explores the profound implications of AI for the services industry and its impact on consumer behavior. As AI continues to reshape the way services are delivered, experienced, and optimized, understanding the evolving role of services marketing is essential for businesses, academics, and practitioners alike.
Recasting Service Quality for AI-Based Service
In: Australasian Marketing Journal (AMJ), official journal of the Australia-New Zealand Marketing Academy (ANZMAC), Vol. 30, Issue 4, pp. 297-312
Artificial intelligence service agents (AISA), such as chatbots and virtual assistants, are becoming increasingly pervasive in service. Research to date has not adequately addressed how the unique nature of AISA shapes consumers' service quality expectations. A deeper understanding of AISA service quality is important for their successful deployment in the service sector. To address this gap, we reviewed the marketing and information systems literatures and conducted qualitative in-depth interviews with 37 informants, comprising 28 AISA users and nine AISA experts. We developed a conceptual framework for how consumers use and evaluate AISA. Twelve service quality dimensions representing AISA service quality emerged from the qualitative evidence, two of which align with AISA's unique characteristics. The study extends service quality theory to a new context and offers fresh insights for theory and practice. It culminates in a research agenda to advance research on AISA service quality.
AI as the Court: Assessing AI Deployment in Civil Cases
By E. Themeli and S. Philipsen. In: K. Benyekhlef (ed.), AI and Law: A Critical Overview, Éditions Thémis, 2021, pp. 213-232
SSRN
The World as a Readymade: A Conversation with Ai Weiwei
Ai Weiwei positions himself first and foremost as a thinker, driven by curiosity and even selfishness, and not shying away from ridicule. Through immersion and direct response to different, unfamiliar conditions, he aims to defamiliarize pre-set thinking, not letting himself be trapped by rationality and led by simplified, predetermined conclusions about the world. Despite the self-proclaimed selfishness at their core, Ai's artistic acts become selfless through resonance, inviting the viewer into his thought experiments with the world, which he engages with as if the world were a readymade. This conversation departed from the transnational film Tree (2021), where Ai meticulously documents the work of Brazilian and Chinese artisans in creating his 32-metre iron sculpture Pequi Tree (2018–2020). We began with political curiosity as a creative driver for the artist, the influence of Duchamp and Warhol, and the choice of the audiovisual medium to reflect reality. The conversation branched out to consider aesthetics, tying the issue of aestheticization to Ai's role as a public intellectual, from an earlier refusal of aesthetics or 'beautification' in the interest of unmediated transparency to the realization that new aesthetics are needed for new publics.
BASE
Engaged to a Robot? The Role of AI in Service
In: Journal of Service Research, Vol. 24, Issue 1, pp. 30-41
ISSN: 1552-7379
This article develops a strategic framework for using artificial intelligence (AI) to engage customers for different service benefits. The framework lays out guidelines for how to use different AIs to engage customers based on the nature of the service task, service offering, service strategy, and service process. AI develops from mechanical, to thinking, to feeling intelligence. As AI advances to a higher intelligence level, human service employees and human intelligence (HI) should be used less at the levels below it. Thus, at the current level of AI development, mechanical service should be performed mostly by mechanical AI, thinking service by both thinking AI and HI, and feeling service mostly by HI. Mechanical AI should be used for standardization when service is routine and transactional, for cost leadership, and mostly at the service delivery stage. Thinking AI should be used for personalization when service is data-rich and utilitarian, for quality leadership, and mostly at the service creation stage. Feeling AI should be used for relationalization when service is relational and high touch, for relationship leadership, and mostly at the service interaction stage. We illustrate various AI applications for these three major AI benefits, providing managerial guidelines for service providers to leverage the advantages of AI, as well as future research implications for service researchers to investigate AI in service from modeling, consumer, and policy perspectives.
SSRN
Working paper
Multiplicity as an AI Governance Principle
SSRN
AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business
In: AI and ethics
ISSN: 2730-5961
This paper examines the ethical obligations companies have when implementing generative artificial intelligence (AI). We point to the potential cyber security risks companies are exposed to when rushing to adopt generative AI solutions or buying into "AI hype". While the benefits of implementing generative AI solutions for business have been widely touted, the inherent risks have been less well publicised. There are growing concerns that the race to integrate generative AI is not being accompanied by adequate safety measures. The rush to buy into the hype of generative AI and not fall behind the competition is potentially exposing companies to broad and possibly catastrophic cyber-attacks or breaches. In this paper, we outline significant cyber security threats generative AI models pose, including potential 'backdoors' in AI models that could compromise user data, and the risk of 'poisoned' AI models producing false results. In light of these cyber security concerns, we discuss the moral obligations of implementing generative AI in business by considering the ethical principles of beneficence, non-maleficence, autonomy, justice, and explicability. We identify two examples of ethical concern, overreliance and over-trust in generative AI, both of which can negatively influence business decisions, leaving companies vulnerable to cyber security threats. The paper concludes by recommending a set of checklists for the ethical implementation of generative AI in business environments to minimise cyber security risk, based on the discussed moral responsibilities and ethical concerns.
A Scholarly Definition of Artificial Intelligence (AI): Advancing AI as a Conceptual Framework in Communication Research
In: Political Communication: An International Journal, Vol. 41, Issue 2, pp. 317-334
ISSN: 1091-7675
AI ethics as subordinated innovation network
In: AI & SOCIETY
AI ethics is proposed, by the Big Tech companies that lead AI research and development, as the cure for the diverse social problems posed by the commercialization of data-intensive technologies. It aims to reconcile capitalist AI production with ethics. However, AI ethics is itself now the subject of wide criticism; most notably, it is accused of being no more than "ethics washing": a cynical means of dissimulation for Big Tech while it continues its business operations unchanged. This paper aims to critically assess, and go beyond, the ethics washing thesis. I argue that AI ethics is indeed ethics washing, but not only that: it has a more significant economic function for Big Tech. To make this argument I draw on the theory of intellectual monopoly capital. I argue that ethics washing is better understood as a subordinated innovation network: a dispersed network of contributors beyond Big Tech's formal employment whose research is indirectly planned by Big Tech, which also appropriates its results. These results are not intended to render AI more ethical, but rather to advance the business processes of data-intensive capital. Because the parameters of AI ethics are indirectly set in advance by Big Tech, the ostensible goal that AI ethics sets for itself, to resolve the contradiction between business and ethics, is in fact insoluble. I demonstrate this via an analysis of the latest trend in AI ethics: the operationalization of ethical principles.