Can Verbal Instructions Counteract Visual Context Effects in Web Surveys?
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 75, Issue 1, pp. 1-1
ISSN: 0033-362X
In: The public opinion quarterly: POQ, Volume 75, Issue 1, pp. 1-18
ISSN: 1537-5331
Pictures used to supplement survey questions can systematically influence the answers obtained. Respondents react to the content of the image, giving higher-frequency reports when pictures of high-frequency events are shown and lower-frequency reports when pictures of low-frequency events are shown. The effects of pictures on responses are similar to those of verbal instructions (i.e., they produce an assimilation effect). Our results show that verbal and visual language have independent effects and also interact with each other. However, verbal instructions have stronger effects than pictures and can be used to counteract visual context effects. We find that respondents pay more attention to verbal instructions when the verbal and visual cues are inconsistent with each other. This article provides evidence for a hierarchy of features that respondents attend to, with verbal language taking precedence over visual cues such as pictures. Effective question writing, with verbal instructions that make the question clear to respondents, reduces visual context effects. We found little evidence that conditions with pictures were evaluated more favourably than conditions without pictures. Adapted from the source document.
In: Bulletin of sociological methodology: Bulletin de méthodologie sociologique : BMS, Volume 147-148, Issue 1-2, pp. 150-168
ISSN: 2070-2779
The relation between answer behaviour and measurement error has been studied extensively. Some answer behaviours, such as answering 'don't know' or 'won't tell', may be considered undesirable. It is not clear to what degree undesirable answer behaviour from the same respondents is present across different surveys. In this study, we investigated to what extent respondents show undesirable answer behaviours consistently over multiple surveys. First, we investigated to what extent the answer behaviours occurred in ten large general population surveys of CentERdata and Statistics Netherlands. Second, we explored the respondent variances and respondent-survey interaction variances to obtain an indication of respondent consistency for each answer behaviour. The results showed that respondents only occasionally give 'don't know' and 'won't tell' answers. An indication of respondent consistency was found in particular for fast responding, slow responding, and 'won't tell' answers. We recommend follow-up research to investigate the relation between respondent characteristics and consistent answer behaviour.
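As a rough illustration of the consistency idea in this record (a sketch of my own, not the authors' variance-component model; the column names respondent_id, survey_id, and dk_rate are hypothetical), the share of total variance that lies between respondents can serve as a crude consistency indicator for one answer behaviour across surveys:

import pandas as pd

def consistency_share(df, value_col="dk_rate"):
    # Between-respondent variance as a share of total variance: a crude,
    # one-way stand-in for the respondent vs. respondent-survey variance
    # components discussed in the abstract.
    grand_mean = df[value_col].mean()
    respondent_means = df.groupby("respondent_id")[value_col].mean()
    between = ((respondent_means - grand_mean) ** 2).mean()
    within = df.groupby("respondent_id")[value_col].var(ddof=0).mean()
    return between / (between + within)

# Toy data: per-respondent rate of 'don't know' answers in two surveys (made up).
toy = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3, 3],
    "survey_id": ["A", "B", "A", "B", "A", "B"],
    "dk_rate": [0.00, 0.02, 0.10, 0.12, 0.01, 0.00],
})
print(round(consistency_share(toy), 2))

A share close to 1 would point to behaviour that is stable within respondents, while a share close to 0 would point to behaviour that varies from survey to survey.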
In: Bulletin of sociological methodology: Bulletin de méthodologie sociologique : BMS, Volume 142, Issue 1, pp. 57-74
ISSN: 2070-2779
Studies of the processes underlying question answering in surveys suggest that the choice of (layout for) response categories can have a significant effect on respondent answers. In recent years, pictures such as emojis or stars have come to be widely used in online communication. It is unclear whether pictorial answer categories can replace traditional verbal formats as measurement instruments in surveys. In this article we investigate different versions of a Likert scale to see if they generate similar results and user experiences. Data come from the non-probability based Flitspanel in the Netherlands. The hearts and stars designs received lower average scores compared to the other formats. Smileys produced average answer scores in line with traditional radio buttons. Respondents evaluated the smiley design most positively. Grid designs were evaluated more negatively. People wanting to compare survey outcomes should be aware of these effects and only compare results when similar response formats are used.
In: International journal of social research methodology: IJSRM ; theory & practice, Volume 22, Issue 5, pp. 517-531
ISSN: 1464-5300
In: Survey research methods: SRM, Volume 13, Issue 2, pp. 195-213
ISSN: 1864-3361
The increasing use of smartphones opens up opportunities for novel ways of survey data collection, but also poses new challenges. Collecting more and different types of data means that studies can become increasingly intrusive. We risk over-asking participants, leading to nonresponse. This study documents nonresponse and nonresponse bias in a smartphone-only version of the Dutch Time Use Survey (TUS). Respondents from the Dutch LISS panel were asked to perform five sets of tasks to complete the whole TUS: 1) accept an invitation to participate in the study and install an app, 2) fill out a questionnaire on the web, 3) keep the time use diary on their smartphone, 4) answer pop-up questions, and 5) give permission to record sensor data (GPS locations and call data). Results show that 42.9% of invited panel members responded positively to the invitation to participate in a smartphone survey. However, only 28.9% of these willing panel members completed all stages of the study. Predictors of nonresponse are somewhat different at every stage. In addition, respondents who complete all smartphone tasks differ from the groups who do not participate at some or any stage of the study. By using data collected in previous waves we show that nonresponse leads to nonresponse bias in estimates of time use. We conclude by discussing implications for using smartphone apps in survey research.
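A quick back-of-the-envelope reading of the two response figures (my interpretation of how the percentages nest, not a number reported in the abstract):

# 42.9% of invited panel members were willing to participate; 28.9% of those
# willing completed all five task sets. Under that reading, the overall
# completion rate among all invitees would be roughly:
invited_to_willing = 0.429
willing_to_complete = 0.289
print(f"{invited_to_willing * willing_to_complete:.1%}")  # about 12.4% of invitees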
In: Journal of survey statistics and methodology: JSSAM, Volume 6, Issue 3, pp. 306-334
ISSN: 2325-0992
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 77, Issue 3, pp. 783-782
ISSN: 0033-362X
In: Sociological methods and research, Volume 40, Issue 1, pp. 32-56
ISSN: 1552-8294
Over the past decades there has been an increasing use of panel surveys at the household or individual level. Panel data have important advantages compared to independent cross sections, but also two potential drawbacks: attrition bias and panel conditioning effects. Attrition bias arises if dropping out of the panel is correlated with a variable of interest. Panel conditioning arises if responses are influenced by participation in the previous wave(s); the experience of the previous interview(s) may affect the answers to questions on the same topic, such that these answers differ systematically from those of respondents interviewed for the first time. In this study the authors discuss how to disentangle attrition and panel conditioning effects and develop tests for panel conditioning allowing for nonrandom attrition. First, the authors consider a nonparametric approach with assumptions on the sample design only, leading to interval identification of the measures for the attrition and panel conditioning effects. Second, the authors introduce additional assumptions concerning the attrition process, which lead to point estimates and standard errors for both the attrition bias and the panel conditioning effect. The authors illustrate their method on a variety of repeated questions in two household panels. The authors find significant panel conditioning effects in knowledge questions, but not in other types of questions. The examples show that the bounds can be informative if the attrition rate is not too high. In most but not all of the examples, point estimates of the panel conditioning effect are similar for different additional assumptions on the attrition process.
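The basic accounting behind the disentangling step can be sketched as follows (a deliberately simplified point-estimate version under strong assumptions of my own, not the authors' interval or point estimators; all arrays and figures are invented):

import numpy as np

def decompose(fresh_now, stayers_now, stayers_wave1, all_wave1):
    # Gap between trained (staying) and fresh respondents on the same question,
    # split into an attrition part (stayers already differed at wave 1) and a
    # remaining panel-conditioning part. Assumes the fresh sample is comparable
    # to the original wave-1 population and that there is no time trend.
    total_gap = np.mean(stayers_now) - np.mean(fresh_now)
    attrition_bias = np.mean(stayers_wave1) - np.mean(all_wave1)
    conditioning = total_gap - attrition_bias
    return total_gap, attrition_bias, conditioning

# Toy 0/1 answers to a knowledge question (made up for illustration).
rng = np.random.default_rng(0)
all_w1 = rng.binomial(1, 0.50, 2000)    # full wave-1 sample
stay_w1 = rng.binomial(1, 0.55, 1500)   # those who later stayed, at wave 1
stay_now = rng.binomial(1, 0.62, 1500)  # the same stayers, current wave
fresh = rng.binomial(1, 0.50, 2000)     # fresh respondents, current wave
print(decompose(fresh, stay_now, stay_w1, all_w1))

In the nonparametric approach described in the abstract the attrition term is only bounded, which is why a sketch like this needs extra assumptions on the attrition process to yield a point estimate.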
In: Survey research methods: SRM, Volume 3, Issue 2, pp. 73-80
ISSN: 1864-3361
"Panel conditioning arises if respondents are influenced by participation in previous surveys, such that their answers differ from the answers of individuals who are interviewed for the first time. Having two panels - a trained one and a completely fresh one - created a unique opportunity for analyzing panel conditioning effects. To determine which type of question is sensitive to panel conditioning, 981 trained respondents and 2809 fresh respondents answered nine questions with different question types. The results in this paper show that panel conditioning mainly arises in knowledge questions. Answers to questions on attitudes, actual behavior, or facts were hardly sensitive to panel conditioning. The effect of panel conditioning in knowledge questions was bigger for questions where fewer respondents knew the answer and mainly associated with the number of times a respondent answered the exact same question before." (author's abstract)
In: The public opinion quarterly: POQ, Volume 72, Issue 5, pp. 985-1007
ISSN: 1537-5331
In: Public Opinion Quarterly, Volume 72, Issue 5, pp. 985-1007
SSRN
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 72, Issue 5, pp. 985-1007
ISSN: 0033-362X
In: Social science computer review: SSCORE, Volume 38, Issue 6, pp. 720-738
ISSN: 1552-8286
Nonserious, inattentive, or careless respondents pose a threat to the validity of self-report research. The current study uses data from the Growth from Knowledge Online Panel, in which respondents are representative of the Dutch population aged over 15 years in terms of education, gender, and age (N = 5,077). Using regression analyses, we investigated whether self-reported seriousness and motivation are predictive of data quality, as measured with multiple indicators (i.e., nonsubstantial values, speeding, internal data consistency, nondifferentiation, response effects). Device group and demographic characteristics (i.e., education, gender, age) were also included in these analyses to see whether they predict data quality. Moreover, we examined whether self-reported seriousness differed by device group and demographic characteristics. The results show that self-reported seriousness and motivation significantly predict multiple data quality indicators. Data quality seems similar across device groups, although smartphone users showed less speeding. Demographic characteristics explain little of the variance in data quality; of those, education is the most consistent predictor, with lower-educated respondents showing lower data quality. Effect sizes for all analyses were in the small to medium range. The present study shows that self-reported seriousness can be used in online attitude survey research to detect careless respondents. Future research should clarify the nature of this relationship, for example with regard to longer surveys and different wordings of seriousness checks.
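To make the regression setup concrete, a hedged sketch of the kind of model described (variable and column names are hypothetical, not those of the panel dataset; the toy data are random):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent; 'speeding' is a 0/1 data-quality indicator predicted
# from self-reported seriousness, motivation, device group, and demographics.
rng = np.random.default_rng(42)
n = 500
toy = pd.DataFrame({
    "speeding": rng.integers(0, 2, n),
    "seriousness": rng.uniform(1, 5, n),
    "motivation": rng.uniform(1, 5, n),
    "device": rng.choice(["pc", "tablet", "smartphone"], n),
    "education": rng.choice(["low", "middle", "high"], n),
    "gender": rng.choice(["male", "female"], n),
    "age": rng.integers(16, 85, n),
})
model = smf.logit(
    "speeding ~ seriousness + motivation + C(device) + C(education) + C(gender) + age",
    data=toy,
)
print(model.fit(disp=False).params.round(2))

In practice, each data-quality indicator mentioned in the abstract (nonsubstantial values, nondifferentiation, and so on) would get its own model of this general form.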
In: Social science computer review: SSCORE, Volume 34, Issue 5, pp. 511-529
ISSN: 1552-8286
This article explores how individuals use online coping strategies after experiencing a negative life event. Many studies have shown that online coping is of rising importance. However, these studies have not provided all pieces of the puzzle because they tend to focus on one particular online venue (e.g., an online support group or social network site [SNS]) and on a limited number of coping strategies. This article aims to provide a more complete picture, by simultaneously examining multiple online and off-line coping strategies, using a survey administered to a representative sample of the 16+ population of the Netherlands. Furthermore, we analyze what kind of Internet activities are related to online coping and whether online coping is associated with well-being. Some 57% of our sample mentioned some form of online coping. Using the Internet for mental disengagement, active coping and planning were the most reported online coping strategies, whereas strategies aimed at emotional coping were reported less frequently. Online coping encompassed several activities: online gaming, which was associated with mental disengagement; searching for information, which was associated with problem-focused coping; and SNS and online support groups, which were associated with mental disengagement, problem-focused coping, and socioemotional coping. Finally, we examined the correlations between online coping and well-being. Controlling for off-line coping, we found online mental disengagement and online socioemotional coping to be inversely related to life satisfaction, self-esteem, and optimism, whereas correlations between online problem-focused coping and well-being were nonsignificant. The implications of these findings are discussed.