Corrigendum to: How Accurately Do Different Evaluation Methods Predict the Reliability of Survey Questions?
In: Journal of survey statistics and methodology: JSSAM, Vol. 8, Issue 5, pp. 1018-1020
ISSN: 2325-0992
In: Journal of survey statistics and methodology: JSSAM, Vol. 6, Issue 4, pp. 465-490
ISSN: 2325-0992
In: Journal of survey statistics and methodology: JSSAM, Vol. 4, Issue 3, pp. 362-381
ISSN: 2325-0992
A diverse range of evaluation methods is available for detecting problems with survey questions. Ex-ante methods are relatively inexpensive because they do not require data collection. Other methods require data collection in either the laboratory or the field. This study evaluates the extent to which four ex-ante methods (the Question Understanding Aid [QUAID], the Survey Quality Predictor [SQP], the Questionnaire Appraisal System [QAS], and expert review), one laboratory method (cognitive interviews), and two field methods (behavior coding and measurement of response latency) predict the reliability of answers to questions. The findings suggest that (1) a multimethod approach to evaluation is appropriate, given differences between the methods in their prediction of reliability, and (2) a combination of a subset of the methods is most predictive of reliability: QAS, QUAID, SQP, cognitive interviewing, and expert review worked as well as all seven methods together.
In: The public opinion quarterly: POQ, Vol. 82, Issue 1, pp. 34-62
ISSN: 1537-5331
In: The public opinion quarterly: POQ, Vol. 80, Issue 3, pp. 741-760
ISSN: 1537-5331
In: NBER Working Paper No. t0328
In: The public opinion quarterly: POQ, Vol. 70, Issue 5, pp. 676-703
ISSN: 1537-5331
In: Wiley Series in Survey Methodology, v. 567
Insightful observations on common question evaluation methods and best practices for data collection in survey research. Featuring contributions from leading researchers and academicians in the field of survey research, Question Evaluation Methods: Contributing to the Science of Data Quality sheds light on question response error and introduces an interdisciplinary, cross-method approach that is essential for advancing knowledge about data quality and ensuring the credibility of conclusions drawn from surveys and censuses. Offering a variety of expert analyses of question evaluation methods, the book provides recommendations and best practices for researchers working with data in the health and social sciences. Based on a workshop held at the National Center for Health Statistics (NCHS), the book presents and compares various question evaluation methods used in modern-day data collection and analysis. Each section includes an introduction to a method by a leading authority in the field, followed by responses from other experts that outline related strengths, weaknesses, and underlying assumptions. Topics covered include behavior coding, cognitive interviewing, item response theory, latent class analysis, split-sample experiments, multitrait-multimethod experiments, and field-based data methods. A concluding discussion identifies common themes across the presented material and their relevance to the future of survey methods, data analysis, and the production of Federal statistics. Together, the methods presented in this book offer researchers various scientific approaches to evaluating survey quality to ensure that the responses to these questions result in reliable, high-quality data. Question Evaluation Methods is a valuable supplement for courses on questionnaire design, survey methods, and evaluation methods.
In: Social science computer review: SSCORE, Vol. 36, Issue 5, pp. 542-556
ISSN: 1552-8286
Does completing a web survey on a smartphone or tablet computer reduce the quality of the data obtained compared to completing the survey on a laptop computer? This is an important question, since a growing proportion of web surveys are done on smartphones and tablets. Several earlier studies have attempted to gauge the effects of the switch from personal computers to mobile devices on data quality. We carried out a field experiment in eight counties around the United States that compared responses obtained by smartphones, tablets, and laptop computers. We examined a range of data quality measures including completion times, rates of missing data, straightlining, and the reliability and validity of scale responses. A unique feature of our study design is that it minimized selection effects; we provided the randomly determined device on which respondents completed the survey after they agreed to take part. As a result, respondents may have been using a device (e.g., a smartphone) for the first time. However, like many of the prior studies examining mobile devices, we find few effects of the type of device on data quality.
In: The public opinion quarterly: POQ, Vol. 66, Issue 4, pp. 594-607
ISSN: 1537-5331
The purpose of the current research is twofold: first, to provide additional empirical findings directly related to monthly or seasonal variations in sample efficiency and response rates in telephone interviews; second, to expand the analysis to include sample demographics within the data set and ascertain whether these items vary across months. To test the hypothesis that there are monthly or seasonal variations in sample efficiency, we analyzed two years of Iowa Behavioral Risk Factor Surveillance Survey (Iowa BRFSS) data and one year of national BRFSS data. Time frames considered in the analyses were month, quarter, and season (summer/non-summer); variables examined included number of call attempts, contact rates, and cooperation rates.
INTRODUCTION: Emerging tobacco products have become increasingly popular, and the US Food and Drug Administration extended its authority to all products meeting the definition of a tobacco product in 2016. These changes may lead to shifts in public perceptions about tobacco products and regulation, and national surveys are attempting to assess these perceptions at the population level. This article describes the item development and cognitive interviewing of the tobacco product and regulation perception items included in two tobacco-focused cycles of the Health Information National Trends Survey, referred to as HINTS-FDA. METHODS: Cognitive interviewing was used to investigate how respondents comprehended and responded to tobacco product and regulation perception items. Adult participants (n = 20) were selected purposively to oversample current tobacco users and were interviewed in two iterative rounds. Weighted descriptive statistics from the fielded HINTS-FDA surveys (N = 5474) were also calculated. RESULTS: Items were generally interpreted as intended, and participants meaningfully discriminated between tobacco products when assessing addiction perceptions. Response selection issues involved inconsistent reporting among participants with little knowledge or ambivalent opinions about either government regulation or tobacco products and ingredients; these issues resolved when a "don't know" response option was included in the survey. The fielded survey found that a non-negligible proportion of the population does not have clear perceptions of emerging tobacco products or government regulation. CONCLUSIONS: A "don't know" response option is helpful for items assessing many emerging tobacco products but presents several analytic challenges that should be carefully considered. Multiple items assessing specific tobacco product and regulation perceptions are warranted in future surveys. IMPLICATIONS: The findings from this study can serve as a foundation for future surveys that assess constructs related to ...