Conspiracy theories on social media have been suspected of contributing to mobilization and radicalization. Yet, few studies have examined the prevalence of psychological variables that may serve to motivate normative and non-normative collective action in this material. Drawing on the "social identity model of collective action", the current study uses a mixed-methods approach to examine the prevalence of collective action cues in conspiracy theory-endorsing social media spaces. Toward this end, I examined four German Facebook groups (Covid-19-Skeptic, Far-Right, Chemtrail, and Political Affairs) during the first months of the Covid-19 pandemic. The results of a qualitative content analysis (N = 828 posts), a hierarchical cluster analysis, and an examination of popularity cues showed that: (a) collective action cues were frequent; (b) most posts transmitted alternative views (Cluster 1) or absolutist ideologies (Cluster 2) with few collective action cues; yet, more than one-third of the posts were either mobilizing (Cluster 3) or wrathful (Cluster 4), entailing multiple collective action cues, including cues theoretically linked to non-normative action; (c) mobilizing and wrathful posts were more engaging than alternative views and absolutist ideologies; (d) the types of posts and levels of engagement varied between the examined groups, such that the Chemtrail and the Far-Right groups disseminated more content with a higher mobilizing potential. The Far-Right group was also the most active in responding to its members. The results of this study are novel in that they demonstrate the prevalence of cues that have been linked to non-normative collective action in psychological research within conspiracy theory-endorsing Facebook groups.
In: New media & society: an international and interdisciplinary forum for the examination of the social dynamics of media and information change, Volume 23, Issue 3, p. 554-577
Eudaimonic entertainment, which motivates reflection on topics such as virtue or meaning, has many benefits, such as fostering wellbeing and inspiring prosocial behavior. Yet, it may also have a darker side when Islamic extremists use accordant elements in online propaganda. So far, this "dark inspiration" has attracted little scholarly interest. The current article fills this gap via a mixed-methods case study of an Islamic extremist influencer on Instagram. The study combined a qualitative content analysis of the account's postings from 2016 to 2018 (n = 301 posts) with a hierarchical cluster analysis and digital data on aggregated user response to these posts. I found four types of posts, ranging from calls for conservativism to calls for violent jihad. Different eudaimonic cues were used in all four types. Likes and comments varied as a function of type, with the violence-promoting posts motivating the largest number of user responses.
The synopsis of this cumulative dissertation reports the theoretical background, methodology, and main results of five studies addressing the role of intergroup versus interpersonal similarities for mediated social encounters under conditions of mortality salience (MS). Drawing upon terror management theory (TMT; Greenberg, Pyszczynski, & Solomon, 1986), individuals were expected to prefer similar over dissimilar others under conditions of MS. In theory, similarity can take place on the intergroup level (i.e., by belonging to the same in-group) as well as on the interpersonal level (e.g., by holding the same attitudes). So far, the relative relevance of intergroup versus interpersonal similarity has not been studied systematically. Particularly in mediated social encounters, intergroup and interpersonal similarity can be independent of each other and might have different effects. The results of five studies in different contexts confirmed that intergroup and interpersonal similarities have different effects in mediated encounters under conditions of MS. In an online dating context, a similarity-attraction effect emerged only among in-group but not out-group members (Study 1), and intergroup but not interpersonal dissimilarity threatened the individuals' defense against MS (Study 2). In a gaming context, individuals preferred an interpersonally similar in-group (versus out-group) avatar (Study 4) but showed no in-group bias when the avatar was interpersonally dissimilar (Study 3). Further, the valence of the in-group played a role under conditions of interpersonal dissimilarity (Study 3), but not under conditions of interpersonal similarity (Study 4). Finally, Study 5 found an increased interest in media content by in-group but not out-group members under conditions of MS even when the content (extremist propaganda) was negatively valenced and did not match the recipients' political attitudes. The results are discussed regarding their implications.
This editorial introduces the thematic issue on inspirational media, including its role in the elicitation of meaning and self-transcendence, audience responses to inspirational narratives, and the potential for inspirational media to be used for manipulative purposes. We first set the stage for the thematic issue by describing an organizing framework by Thrash and Elliot (2003) to study inspiration. We then situate the seven articles published in this thematic issue along the different components of this framework, namely media content capable of invoking transcendence through emotions and excitatory responses, and a motivational impulse to act upon the ideas acquired from content. This thematic issue thereby highlights unique perspectives for understanding media's ability to serve as a source of inspiration, be it for social benefit or detriment. Finally, we consider directions for future research on inspirational media.
Right-wing extremists and Islamist extremists try to recruit new followers by addressing their national (for instance, German) or religious (Muslim) social identity via online propaganda videos. Two studies examined whether capitalizing on a shared group membership affects the emotional and cognitive response towards extremist propaganda. In both studies, Germans/non-migrants, Muslim migrants, and control participants (N = 235) were confronted with right-wing extremist and Islamist extremist videos. Emotional and cognitive effects on students (Study 1) and apprentices (Study 2) were assessed. Results showed a generally negative evaluation of extremist videos. More relevant, in-group propaganda led to more emotional costs in both studies. Yet, the responses varied depending on educational level: students reported more negative emotions and cognitions after in-group directed videos, while apprentices reported more positive emotions and cognitions after in-group directed propaganda. Results are discussed in light of negative social identities.
Consuming conspiracy theories erodes trust in democratic institutions, while conspiracy beliefs demotivate democratic participation, posing a potential threat to democracy. The proliferation of social media, especially the emergence of numerous alternative platforms with minimal moderation, has greatly facilitated the dissemination and consumption of conspiracy theories. Nevertheless, there remains a dearth of knowledge concerning the origin and evolution of specific conspiracy theories across different platforms. This study aims to address this gap through a large-scale, cross-platform examination of the genesis of new conspiracy theories surrounding the death of Jeffrey Epstein. Through a (semi-)automated content analysis conducted on a distinctive dataset comprising N = 8,020,314 Epstein-related posts posted on both established platforms (Twitter, Reddit) and alternative platforms (Gab and 4Chan), we demonstrate that conspiracy theories emerge early and influence public discourse well in advance of reports from established media sources. Our data show that users of the studied platforms immediately turn to conspiratorial explanations, exhibiting skepticism towards the official representation of events. Especially on alternative platforms, this skepticism swiftly transformed into unwarranted conspiracy theorizing, partly bolstered by references to alternative news media sources. The present study shows how conspiratorial explanations thrive in low information environments and how alternative media play a role in turning rational skepticism into unwarranted conspiracy theories.
Planning and designing a study that links content analysis and panel data is a complex endeavor, and managing a collaborative research project across multiple organizations involves many hurdles and challenges. Both the complexity of a linkage design and the challenging nature of a collaborative research project were amplified by the COVID-19 pandemic, demanding creative solutions to many issues and considerable planning by the researchers involved. In particular, the challenges involved in gathering, storing, analyzing, and accessing data were heightened by the lack of face-to-face contact and of a standardized technical infrastructure for digital collaborative research projects. This article aims to give an overview of the technical infrastructure involved in the content analysis part of a large-scale linkage study, providing researchers with a blueprint of the many moving parts involved in the study's implementation. This overview is discussed with a reflective eye toward the challenges encountered and solutions found in the process, the added layer of complexity of a global pandemic, and potential lessons for future projects of this kind.
Online media offer unprecedented access to digital public spheres, largely enhancing users' opportunities for participation and providing new means for strengthening democratic discourse. At the same time, the last decades have demonstrated that online discourses are often characterised by so-called 'dark participation': the spreading of lies and incivility. Using 'problematic behaviour theory' as a framework and focusing on incivility as a specific form of dark participation, this article investigates the role of users' personal characteristics, media use, and online experiences in relation to offensive and hateful online behaviour. Using a random-quota survey of the German population, we explored how dark personality traits, political attitudes and emotions, the frequency and spaces of online-media use, and users' experiences with both civil and uncivil online discourses predicted participants' own uncivil behaviour, such as posting, sharing, or liking uncivil content. We found that 46% of the participants who had witnessed incivility in the last three months also engaged in uncivil participation. A hierarchical logistic regression analysis showed that incivility was associated with manipulative personality traits as measured by the dark triad, right-wing populist voting intentions, and frequent social-media use. Experiences with both civil comments and hate speech predicted higher levels of uncivil participation. The strongest predictor was participants' personal experiences with online victimisation. Overall, the results confirmed that dark participation in the sense of uncivil engagement results from the interplay of personality traits, an online environment that allows for deviant engagement, and, most importantly, participants' experiences in said environment.
Participatory formats in online journalism offer increased options for user comments to reach a mass audience, also enabling the spreading of incivility. As a result, journalists feel the need to moderate offensive user comments in order to prevent the derailment of discussion threads. However, little is known about the principles on which forum moderation is based. The current study aims to fill this void by using automated content analysis to examine 673,361 user comments (including all incoming and rejected comments) from the largest newspaper forum in Germany (Spiegel Online) in terms of the moderation decision, the topic addressed, and the use of insulting language. The analyses revealed that the deletion of user comments is a frequently used moderation strategy. Overall, more than one-third of the comments studied were rejected. Further, users mostly engaged with political topics. The usage of swear words was not a reason to block a comment, except when offenses were used in connection with politically sensitive topics. We discuss the results in light of the necessity for journalists to establish consistent and transparent moderation strategies.