By the Numbers: Using Analytics.usa.gov to Learn about Traffic on Government Websites
In: Documents to the people: DttP, Volume 43, Issue 3
ISSN: 0091-2085, 0270-5095
In: IASSIST quarterly: IQ, Volume 44, Issue 1-2, pp. 1-2
ISSN: 2331-4141
As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on these issues.
In recent years, efforts "to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research" have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice.
We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, and established role in data management and preservation. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:
Center for Open Science (COS) / Open Science Framework (OSF)[i]
Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
CUrating for REproducibility (CURE)[iii]
Project TIER[iv]
Data Curation Network[v]
UK Reproducibility Network[vi]
While many new initiatives have launched in recent years, the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) through well-established consortia such as the Inter-university Consortium for Political and Social Research (ICPSR) long before "reproducibility crisis" became a commonly used phrase and Ioannidis published his essay "Why Most Published Research Findings Are False" (Ioannidis, 2005).
The articles in this issue address several very important aspects of reproducible research:
Identification of barriers to reproducibility and solutions to such barriers
Evidence synthesis as related to transparent reporting and reproducibility
Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it.
The issue begins with "Reproducibility literature analysis," which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis.
The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library.
A systematic review is a research approach that involves the quantitative and/or qualitative synthesis of data collected through a comprehensive literature review. "Methods reporting that supports reader confidence for systematic reviews in psychology" looks at the reproducibility of electronic literature searches reported in psychology systematic reviews.
A fundamental challenge in reproducing or replicating computational results is the need for researchers to make available the code used in producing those results. But sharing code and ensuring it runs correctly for another user can present significant technical challenges. In "Reproducibility, preservation, and access to research with Reprozip, Reproserver" the authors describe open source software that they are developing to address these challenges.
Taking a published article and attempting to reproduce its results is an exercise sometimes used in academic courses to highlight the inherent difficulty of the process. The final article in this issue, "ReprohackNL 2019: How libraries can promote research reproducibility through community engagement," describes an innovative library-based variation on this exercise.
Harrison Dekker, Data Librarian, University of Rhode Island
Amy Riegelman, Social Sciences Librarian, University of Minnesota
References
Center for Effective Global Action (2020) About the Berkeley Initiative for Transparency in the Social Sciences. Available at: https://www.bitss.org/about (accessed 23 June 2020).
Ioannidis, J.P. (2005) 'Why most published research findings are false', PLoS Medicine, 2(8), p. e124. doi: https://doi.org/10.1371/journal.pmed.0020124
[i] https://osf.io
[ii] https://www.bitss.org/
[iii] http://cure.web.unc.edu
[iv] https://www.projecttier.org/
[v] https://datacurationnetwork.org/
[vi] https://ukrn.org
In: Documents to the people: DttP, Volume 46, Issue 3, pp. 20-27
The reproducibility of scientific studies has recently come under increased scrutiny in both the popular and scientific press. Studies from various disciplines (e.g., psychology, health sciences) have revealed failures to reproduce and replicate research. This has led to declarations that science is experiencing a "reproducibility crisis" and that this crisis has negative consequences for science, the public, and public policy. Two of the authors have previously published on reproducibility and the services and expertise librarians and libraries offer that make the library community a key part of supporting reproducible research, and we direct you to these articles for more information on this broader topic.
In: Media and Communication, Volume 9, Issue 4, pp. 120-133
Through various online activities, individuals produce large amounts of data that are collected by companies for the purpose of providing users with personalized communication. In light of this mass collection of personal data, the transparency and control paradigm for personalized communication has attracted increased attention from legislators and academics. However, the scientific literature contains no clear definition of personalization transparency and control, which could lead to reliability and validity issues, impeding knowledge accumulation in academic research. In a literature review, we analyzed 31 articles and observed that: 1) no clear definitions of personalization transparency or control exist; 2) the terms are used interchangeably in the literature; 3) collection, processing, and sharing of data are the three objects of transparency and control; and 4) increased transparency does not automatically increase control, because awareness must first be raised in the individual. Also, the relationship between awareness and control depends on the ability and the desire to control. This study contributes to the field of algorithmic communication by creating a common understanding of the transparency and control paradigm and thus improves the validity of results. Further, it advances research on the issue by synthesizing existing studies on the topic, presenting the transparency-awareness-control framework, and formulating propositions to guide future research.
In: Journal of vocational behavior, Volume 118, p. 103377
ISSN: 1095-9084
In: Analyses of social issues and public policy
ISSN: 1530-2415
Abstract: The term Islamophobia is used in research studies; however, many researchers neither use the term nor measure the construct in the same way. We evaluate measures based upon their alignment with a two-part definition of Islamophobia that includes: (1) a perceived fear or threat of Islam/Muslims and (2) an engagement in prejudicial attitudes and/or discriminatory actions. We conducted a systematic literature search of 15 databases to identify Islamophobia-related measures used in the literature from 1992 to 2018 (updated 2022). The measures were reviewed to examine alignment with the definition of Islamophobia and their psychometric properties. We identified 12 validated measures of Islamophobia and provide an in-depth review of each measure. Additionally, we cataloged 249 validated and nonvalidated measures across six content areas: Islamophobia (N = 24), prejudicial attitudes (N = 80), discriminatory actions (N = 21), fear of Muslims (N = 23), anti-other group (N = 52), and experiences of discrimination for Muslims (N = 49), organized by validity, measure structure, and other criteria (Tables 1-12). This systematic review can assist researchers in identifying and selecting the most reliable and valid measure related to their definition of Islamophobia.
In: Children and youth services review: an international multidisciplinary review of the welfare of young people, Volume 136, p. 106425
ISSN: 0190-7409
In: Cultural diversity and ethnic minority psychology, Volume 29, Issue 3, pp. 358-371
ISSN: 1939-0106
In: SSM - Mental health, Volume 2, p. 100139
ISSN: 2666-5603
In: SSM - Mental health, Volume 1, p. 100029
ISSN: 2666-5603