Affective Computing
In: Privacy in Germany: PinG ; Datenschutz und Compliance, Issue 1
ISSN: 2196-9817
In: Journal of enterprise information management: an international journal, Volume 34, Issue 5, pp. 1551-1575
ISSN: 1758-7409
Purpose: Decision-making in human beings is affected by emotions and sentiments. Affective computing takes this into account, aiming to tailor decision support to the emotional states of people. However, the representation and classification of emotions is a very challenging task. The study used customized deep learning models to aid in the accurate classification of emotions and sentiments.
Design/methodology/approach: The study presents an affective computing model using both text and image data. Text-based affective computing was conducted on four standard datasets using three customized deep learning models, namely LSTM, GRU and CNN. The study used four variants of the LSTM architecture: a plain LSTM model, an LSTM model with GloVe embeddings, a bidirectional LSTM model, and an LSTM model with an attention layer.
Findings: The results suggest that the proposed method outperforms earlier methods. For image-based affective computing, data was extracted from Instagram, and facial emotion recognition was carried out using three deep learning models, namely a CNN, transfer learning with the VGG-19 model, and transfer learning with the ResNet-18 model. The results suggest that the proposed methods for both text and image can be used for affective computing and aid in decision-making.
Originality/value: Whereas earlier studies have relied on classical machine learning algorithms for affective computing, the present study applies deep learning.
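The LSTM variants named in this abstract are standard architectures. As a concrete illustration only, here is a minimal sketch (in PyTorch, not the authors' code) of the bidirectional-LSTM-with-attention variant; vocabulary size, embedding and hidden dimensions, and the three-class output are illustrative assumptions, and the GloVe variant would simply initialise the embedding table with pretrained vectors.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Sketch of a Bi-LSTM text classifier with additive attention."""

    def __init__(self, vocab_size=20_000, emb_dim=100, hidden=64, n_classes=3):
        super().__init__()
        # To mimic the GloVe variant, load pretrained vectors into this table.
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)   # per-timestep attention scores
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))        # (batch, seq, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)     # attention weights over time
        sent = (w * h).sum(dim=1)                 # weighted sentence vector
        return self.out(sent)

model = BiLSTMAttention()
logits = model(torch.randint(1, 20_000, (4, 32)))  # batch of 4 toy sequences
print(logits.shape)  # torch.Size([4, 3])
```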
In: Public culture, Volume 34, Issue 1, pp. 21-45
ISSN: 1527-8018
Abstract: Much attention to affective computing has focused on its alleged ability to "tap into human affects," a trope also foundational to broader theorizations about big-data surveillance. What remains understudied and undertheorized is affective computing's social life, where interested parties contest and collude on its deployment. This essay traces how such portable technologies as sentiment analysis and "like" buttons wound up redefining collective action in China, which partly explains the conservative turn observed in Chinese online cultures since the mid-2010s. It unpacks affective computing's ambient politics: the fraught processes whereby social actors aggressively repackage, reinterpret, and remediate these technologies to fit their agendas, changing social standards for denoting emotions along the way. This essay calls to reorient critical analysis of affective computing away from its design epistemics to its ambient politics and, in parallel, to shift the focus from interiorized subjects to conditions of collective existence.
In: IEEE technology and society magazine: publication of the IEEE Society on Social Implications of Technology, Volume 31, Issue 4, pp. 22-29
ISSN: 0278-0097
In: NEUCOM-D-24-06785
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to the cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers to better understand this challenging and exciting research field.
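As a concrete illustration of the simplest fusion family such surveys cover, the sketch below shows feature-level (early) fusion: per-modality feature vectors are concatenated before a shared classifier. This is a generic sketch, not any surveyed system; the feature dimensions and class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EarlyFusionClassifier(nn.Module):
    """Sketch of early fusion over audio, visual and text features."""

    def __init__(self, d_audio=74, d_visual=35, d_text=300, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_audio + d_visual + d_text, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, audio, visual, text):
        # Early fusion: concatenate modality features before classification.
        fused = torch.cat([audio, visual, text], dim=-1)
        return self.net(fused)

model = EarlyFusionClassifier()
logits = model(torch.randn(8, 74), torch.randn(8, 35), torch.randn(8, 300))
print(logits.shape)  # torch.Size([8, 2])
```

Late (decision-level) fusion, by contrast, would train one classifier per modality and combine their predictions; hybrid schemes mix the two.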
In: AI and ethics, Volume 3, Issue 3, pp. 937-946
ISSN: 2730-5961
Abstract: Automatic prediction of human attributions of valence and arousal using facial recognition technologies can improve human–computer and human–robot interaction. However, data protection has become an issue of great concern in affect recognition using facial images, as the facial identities of people (i.e. recognising who a person is) could be exposed in the process. For instance, malicious individuals could exploit facial images of users to assume their identities and infiltrate biometric authentication systems. Possible solutions to protect the facial identity of users are to: (1) extract anonymised facial features, namely action units (AUs), on users' local machines, discard the facial images, and send only the AUs to the developer for processing; and (2) employ a federated learning approach, i.e. process users' facial images on their local machines and send only the locally trained models back to the developer's machine for augmenting the final model. In this paper, we implement and compare the performance of these privacy-preserving strategies for affect recognition. Results on the popular RECOLA affective dataset show promising affect recognition performance when adopting a federated learning approach to protect users' identities, with a Concordance Correlation Coefficient of 0.426 for valence and 0.390 for arousal.
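Two concrete pieces sit behind the numbers reported above: the Concordance Correlation Coefficient used to score valence and arousal predictions, and the federated-averaging step in which only locally trained weights, never raw facial images, reach the developer. A minimal sketch of both follows; the function names are hypothetical, not taken from the paper's code.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D rating traces."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

def federated_average(client_weights):
    """One FedAvg round: element-wise mean of each client's parameters.

    Unweighted for simplicity; FedAvg proper weights each client by its
    number of training samples.
    """
    return [np.mean(layers, axis=0) for layers in zip(*client_weights)]

# Sanity checks on toy data.
t = np.linspace(-1, 1, 100)
print(ccc(t, t))            # identical traces -> 1.0
w_global = federated_average([
    [np.ones((2, 2)), np.zeros(2)],   # client A's (toy) weights
    [np.zeros((2, 2)), np.ones(2)],   # client B's (toy) weights
])
print(w_global[0])          # element-wise mean: all entries 0.5
```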
In: Transposition: musique et sciences sociales, Volume 6
ISSN: 2110-6134
The realm of the voice and the realm of the affective often share the distinction of the ineffable. Over the past 5-10 years, there has been a proliferation of scientific research and commercial products focused on the measurement of affect in the voice, attempting to codify and quantify that which had previously been understood as beyond language. Following similar work on the digital detection of facial expressions of emotion, this form of signal capture monitors data "below the surface," deriving information about the subject's intentions, objectives, or emotions by monitoring the voice signal for parameters such as timing, volume, pitch changes, and timbral fluctuation. Products claim to detect the mood, personality, truthfulness, confidence, mental health, and investability quotient of a speaker based on the acoustic component of their voice. This software is being used in a range of applications, from targeted surveillance, mental health diagnoses, and benefits administration to credit management. A study of code, schematics, and patents reveals how this software imagines human subjectivity, and how such recognition is molded by, and in service of, the risk economy, revealing an evolution from truth-telling to diagnostic to predictive forms of listening.
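The voice parameters this abstract lists map onto standard acoustic descriptors. Purely as an illustration of the signal-capture side, and not of any vendor's proprietary pipeline, the following sketch extracts a few such low-level features with librosa; the audio file path is a placeholder.

```python
import numpy as np
import librosa

# Placeholder path; any mono speech recording would do.
y, sr = librosa.load("speech_sample.wav", sr=16_000)

rms = librosa.feature.rms(y=y)[0]                              # volume
f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # pitch track
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]    # timbre proxy

features = {
    "mean_rms": float(np.mean(rms)),
    "pitch_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),   # pitch movement
    "voiced_ratio": float(np.mean(voiced)),                   # rough timing cue
    "mean_spectral_centroid_hz": float(np.mean(centroid)),
}
print(features)
```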
In: Asian journal of research in social sciences and humanities: AJRSH, Volume 6, Issue 4, p. 229
ISSN: 2249-7315
In: Telos: revista de estudios interdisciplinarios en ciencias sociales, Volume 26, Issue 3, pp. 843-860
ISSN: 2343-5763
We are currently witnessing the rise of a platform capitalism that bases a significant part of its economy on producing behavioral profiles to direct users' actions towards private ends. The combination of radical behaviorist techniques with algorithmic data-processing technologies has intensified and consolidated a force that Bernard Stiegler identified as "psychopower". This article aims to demonstrate how this is achieved by deploying two control technologies in the architecture of digital platforms: Affective Computing and the Hook Model. Through an ethnographic study of the BeReal social network, we show how these two technologies first capture users' attention and create usage habits and, second, promote the circulation of emotions so that these can be linked to specific contexts and datafied to develop behavioral profiles. Finally, we conduct a theoretical exercise to argue that both control technologies are key elements of a new power dispositif, which we call "pulsional", that triggers action in individuals while bypassing their conscious reflection, with detrimental consequences for the exercise of their freedom.
In: Healthcare Technologies
"This book focuses on the integration of emotions into artificial environments such as computers and robotics"--Provided by publisher
How does the form of our surroundings impact the ways we feel? This paper extends the body of research on the effects that space and light have on emotion by focusing on critical features of architectural form and illumination colors and their spatiotemporal impact on arousal. For that purpose, we collected a corpus of spatial transitions in video form, over 60 minutes long, annotated by three participants in terms of arousal in a time-continuous and unbounded fashion. We process the annotation traces of that corpus in a relative fashion, focusing on the direction of arousal changes (increasing or decreasing) as affected by changes between consecutive rooms. Results show that properties of form such as curved or complex spaces align strongly with increased arousal. The analysis presented in this paper sheds some initial light on the relationship between arousal and core spatiotemporal features of form, which is of particular importance for the affect-driven design of architectural spaces.
The research leading to these results received funding from the European Union H2020 Horizon Programme (2014-2020) under grant agreement 952002, project PrismArch: Virtual reality aided design blending crossdisciplinary aspects of architecture in a multi-simulation environment.
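The relative processing of annotation traces described above can be illustrated with a short sketch: continuous, unbounded arousal traces are reduced to the direction of change across each room transition. The trace values and room boundaries below are invented, and the segmentation into rooms is assumed given.

```python
import numpy as np

def transition_directions(trace, room_boundaries):
    """Reduce a continuous arousal trace to per-transition change directions.

    trace: 1-D array of time-continuous arousal annotations.
    room_boundaries: indices at which a new room begins.
    Returns +1 (arousal increased), -1 (decreased) or 0 per transition.
    """
    segments = np.split(trace, room_boundaries)
    means = [segment.mean() for segment in segments]
    return np.sign(np.diff(means))

# Toy trace covering three rooms (boundaries at samples 3 and 6).
trace = np.array([0.10, 0.20, 0.15, 0.60, 0.70, 0.65, 0.30, 0.20])
print(transition_directions(trace, room_boundaries=[3, 6]))  # [ 1. -1.]
```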
Recent studies show that the elderly population in European society has increased considerably in recent years. This fact has led the European Union and many countries to propose new policies for care services directed at this group. The current trend is to promote the care of the elderly in their own homes, thus avoiding investing resources in residential facilities. With this in mind, new solutions are emerging in this direction that try to make use of continuous advances in computer science. This paper advances this area by proposing the use of a personal assistant to help older people at home while they carry out their daily activities. The proposed personal assistant is called ME3CA and can be described as a cognitive assistant that offers users a personalised exercise plan for their rehabilitation. The system consists of a sensorisation platform along with decision-making algorithms paired with emotion-detection models. ME3CA detects the users' emotions, which are used in the decision-making process, allowing for more precise suggestions and accurate (and unbiased) knowledge about the users' opinion of each exercise.
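As a purely hypothetical illustration of the decision step this abstract outlines, the sketch below lets a detected emotion adjust the score of candidate exercises before a suggestion is made. The emotion labels, weights, and exercises are invented for illustration and are not taken from ME3CA.

```python
# Invented labels and weights, purely for illustration (not ME3CA's code).
ADJUSTMENT = {"frustrated": -0.5, "tired": -0.3, "neutral": 0.0, "happy": 0.2}

def suggest_exercise(candidates, detected_emotion):
    """Pick the exercise with the highest emotion-adjusted suitability.

    candidates maps exercise name -> (base suitability, physically demanding?).
    Demanding exercises are penalised or boosted by the detected emotion.
    """
    adjust = ADJUSTMENT.get(detected_emotion, 0.0)
    scored = {name: base + (adjust if demanding else 0.0)
              for name, (base, demanding) in candidates.items()}
    return max(scored, key=scored.get)

candidates = {
    "arm raises": (0.7, False),
    "squats":     (0.9, True),
    "breathing":  (0.5, False),
}
print(suggest_exercise(candidates, "tired"))  # -> "arm raises"
```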