Special Issue on Disinformation, Hoaxes and Propaganda within Online Social Networks and Media
In: Online social networks and media: OSNEM, Vol. 23, p. 100132
ISSN: 2468-6964
In: Technical report, 2016.
New Psychoactive Substances (NPS) are drugs that lie in a legislative grey area: since they are not officially and internationally banned, their trade may not be prosecutable. The phenomenon is exacerbated by the fact that NPS can easily be sold and bought online, which also affects social media such as forums and social networks, often used to discuss and advertise new drugs. This work introduces techniques for analysing data coming from the Web, using scraping and content indexing for fast computation of analytics, useful for a broader understanding of the phenomenon. We describe a broad list of tools suitable for scraping activity and illustrate how we collected data from forums and websites.
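As a rough illustration of the scraping-and-indexing pipeline sketched in the abstract, the snippet below fetches a forum page and builds an inverted index over its posts; the URL, page structure, and CSS selector are hypothetical placeholders, not the tools actually surveyed in the report.

```python
# Minimal scraping + inverted-index sketch; forum URL and selector are
# hypothetical and must be replaced with a real target page.
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

def scrape_posts(url):
    """Fetch a forum page and return the text of its posts."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(strip=True) for p in soup.select("div.post")]

def build_index(posts):
    """Inverted index: term -> ids of posts containing it."""
    index = defaultdict(set)
    for pid, text in enumerate(posts):
        for term in text.lower().split():
            index[term].add(pid)
    return index

posts = scrape_posts("https://forum.example.org/threads/nps")  # hypothetical
index = build_index(posts)
print(index.get("mephedrone", set()))  # posts mentioning a given substance
```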
In: PNAS nexus, Vol. 3, No. 5
ISSN: 2752-6542
Echo chambers, i.e., clusters of users exposed to news and opinions in line with their previous beliefs, have been observed in many online debates on social platforms. We propose an unbiased, entropy-based method for detecting echo chambers that is completely agnostic to the nature of the data. In the Italian Twitter debate about the Covid-19 vaccination, we find a limited presence of users in echo chambers (about 0.35% of all users). Nevertheless, their impact on the formation of a common discourse is strong, as users in echo chambers are responsible for nearly a third of the retweets in the original dataset. Moreover, in the case study observed, echo chambers appear to be a receptacle for disinformative content.
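The abstract does not spell out the method's details; as a hedged illustration of the general idea of entropy-based detection, the sketch below flags users whose interaction neighbourhood carries a single opinion (near-zero Shannon entropy). The neighbour-leaning formulation, toy data, and threshold are assumptions of this sketch, not the paper's exact procedure.

```python
from collections import Counter
from math import log2

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a sequence of opinion labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

# Toy data: opinion labels of the accounts each user retweets (assumed).
neighbour_leanings = {
    "alice": ["pro", "pro", "pro", "pro"],    # one-sided exposure
    "bob":   ["pro", "anti", "anti", "pro"],  # mixed exposure
}

ECHO_THRESHOLD = 0.25  # bits; an assumed cut-off for this sketch
for user, labels in neighbour_leanings.items():
    h = shannon_entropy(labels)
    verdict = "echo chamber" if h < ECHO_THRESHOLD else "mixed"
    print(f"{user}: H = {h:.2f} bits -> {verdict}")
```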
In: Online social networks and media: OSNEM, Vol. 9, pp. 1-16
ISSN: 2468-6964
In: Online social networks and media: OSNEM, Vol. 23, p. 100133
ISSN: 2468-6964
The COVID-19 pandemic has impacted every human activity and, because of the urgency of finding proper responses to such an unprecedented emergency, it generated a widespread societal debate. The online version of this discussion was not exempt from misinformation campaigns but, differently from what was already witnessed in other debates, the flow of false information about COVID-19, intentional or not, put public health at severe risk, possibly reducing the efficacy of government countermeasures. In this manuscript, we study the effective impact of misinformation in the Italian societal debate on Twitter during the pandemic, focusing on the various discursive communities. In order to extract such communities, we start from verified users, i.e., accounts whose identity is officially certified by Twitter. For each pair of verified users, we count how many unverified ones interacted with both of them via tweets or retweets: if this number is statistically significant, i.e., so large that it cannot be explained only by their activity on the online social network, we consider the two verified accounts as similar and connect them with a link in a monopartite network of verified users. The discursive communities can then be found by running a community detection algorithm on this network. We observe that, despite being a mostly scientific subject, the COVID-19 discussion shows a clear division into what turn out to be different political groups. We filter the network of retweets from random noise and check for the presence of messages displaying URLs. Using the well-known browser extension NewsGuard, we assess the trustworthiness of the most recurrent news sites among those tweeted by the political groups. The share of low-reputability posts reaches 22.1% in the right and centre-right wing community, and its contribution is even stronger in absolute numbers, due to the activity of this group: 96% of all non-reputable URLs shared by political groups come from this ...
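A minimal sketch of the projection step described above, assuming a toy interaction table and a fixed co-interaction threshold in place of the paper's statistical-significance test; the community-detection algorithm is likewise a generic stand-in.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx
from networkx.algorithms import community

# Toy data: unverified user -> verified accounts they interacted with.
interactions = {
    "u1": {"v_news", "v_pol_a"},
    "u2": {"v_news", "v_pol_a", "v_pol_b"},
    "u3": {"v_pol_b", "v_sci"},
    "u4": {"v_news", "v_pol_a"},
    "u5": {"v_pol_b", "v_sci"},
}

# Count unverified users shared by each pair of verified accounts.
co_interactions = defaultdict(int)
for verified in interactions.values():
    for a, b in combinations(sorted(verified), 2):
        co_interactions[(a, b)] += 1

# Keep a link only when the count is "significant"; a fixed threshold
# is a crude stand-in for the paper's null-model validation.
THRESHOLD = 2
G = nx.Graph()
G.add_edges_from(pair for pair, n in co_interactions.items() if n >= THRESHOLD)

# Discursive communities via a standard community-detection algorithm.
for i, c in enumerate(community.greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(c)}")
```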
In: Online social networks and media: OSNEM, Vol. 6, pp. 41-57
ISSN: 2468-6964
New Psychoactive Substances (NPS) are drugs that lie in a legislative grey area: since they are not officially and internationally banned, their trade may not be prosecutable. The phenomenon is exacerbated by the fact that NPS can easily be sold and bought online. Here, we consider large corpora of textual posts published on online forums specialised in drug discussions, plus a small set of known substances and associated effects, which we call seeds. We propose a semi-supervised approach to knowledge extraction, applied to the detection of drugs (including NPS) and their effects in the corpora under investigation. Starting from the very small set of initial seeds, the work highlights how a contrastive approach and context deduction are effective in detecting substances and effects in the corpora. Our promising results, which feature an F1 score close to 0.9, pave the way for shortening the detection time of new psychoactive substances once these are discussed and advertised on the Internet.
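The abstract's contrastive, seed-driven pipeline is not specified in detail here; the following toy sketch illustrates the general bootstrapping idea, where contexts learned around seed substances are reapplied to surface candidate new ones. The corpus, seeds, and one-word contexts are all invented for illustration.

```python
import re
from collections import Counter

# Toy forum posts (assumed data; real corpora are far larger and noisier).
corpus = [
    "tried mephedrone last night, strong euphoria but a bad comedown",
    "tried hexen from an online shop, the euphoria is short",
    "methylone gave me mild euphoria and some anxiety",
]
seed_drugs = {"mephedrone", "methylone"}  # known substances (seeds)
seed_effects = {"euphoria", "anxiety"}    # known effects (seeds)

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# 1) Learn contexts: words that immediately precede a seed substance.
contexts = Counter()
for post in corpus:
    toks = tokens(post)
    for i in range(1, len(toks)):
        if toks[i] in seed_drugs:
            contexts[toks[i - 1]] += 1

# 2) Bootstrap: a word that follows a learned context word, and is not
#    already a seed, becomes a candidate new substance.
candidates = set()
for post in corpus:
    toks = tokens(post)
    for i in range(1, len(toks)):
        if toks[i - 1] in contexts and toks[i] not in seed_drugs | seed_effects:
            candidates.add(toks[i])

print(candidates)  # -> {'hexen'} with the toy data above
```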
Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for the detection of anomalous Twitter accounts. Second, we create a baseline dataset of verified human and fake follower accounts, which is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and of the cost of gathering the data needed to compute the features. The final result is a novel "Class A" classifier, general enough to thwart overfitting, lightweight thanks to the use of the least costly features, and still able to correctly classify more than 95% of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, other than being supported by a thorough experimental methodology and being interesting on their own, also pave the way for further investigation of the novel issue of fake Twitter followers.
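A compact sketch of the train-and-evaluate setup the abstract describes, assuming invented features, synthetic data, and a toy labelling rule; it does not reproduce the paper's baseline dataset or exact feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Invented per-account features: followers/friends ratio, tweets per day,
# and a profile-has-bio flag.
X = np.column_stack([
    rng.exponential(2.0, n),  # followers/friends ratio
    rng.poisson(5, n),        # tweets per day
    rng.integers(0, 2, n),    # has a profile bio
])
# Toy labelling rule (1 = fake follower): low ratio and low activity.
y = ((X[:, 0] < 1.0) & (X[:, 1] < 4)).astype(int)

# A shallow forest keeps the model lightweight and curbs overfitting,
# in the spirit of the paper's "Class A" design goals.
clf = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
print(f"cv accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```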
In: Technical report IIT TR-03/2014, 2014.
Fake followers are those Twitter accounts created to inflate the number of followers of a target account. Fake followers are dangerous to the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for the detection of anomalous Twitter accounts. Second, we create a gold standard of verified human and fake accounts. Then, we exploit the gold standard to train a set of machine-learning classifiers built over the reviewed rules and features. Most of the rules proposed by Media deliver unsatisfactory performance in revealing fake followers, while features proposed by Academia for spam detection perform well. Building on the most promising features, we optimise the classifiers both in terms of reduction of overfitting and of the costs of gathering the data needed to compute the features. The final result is a "Class A" classifier that is general enough to thwart overfitting and uses the least costly features, while still correctly classifying more than 95% of the accounts of the training set. The findings reported in this paper, other than being supported by a thorough experimental methodology and being interesting on their own, also pave the way for further investigation.
In: Technical report, 2013.
Fake followers are those Twitter accounts created to inflate the number of followers of a target account. Fake followers are dangerous to the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting economy, politics, and society. In this paper, we provide several contributions. First, we review the most relevant existing criteria (proposed by Academia and Media) for the detection of anomalous Twitter accounts, and we then assess their capability to detect fake followers. In particular, we contribute the creation of a gold standard of verified human accounts, as well as a set of known fake accounts. We test the above-cited criteria against these two datasets, showing that the analysed mechanisms provide unsatisfactory performance in revealing fake followers. Moreover, building upon these results, we introduce a novel taxonomy to discriminate fake followers from legitimate ones and from spammers. The findings reported in this paper, other than being supported by a thorough experimental methodology and being interesting on their own, also pave the way for further investigation.