Eye tracking is now a common technique for studying the moment-by-moment cognition of people processing visual information. Yet the technique has rarely been applied across survey modes. Our paper uses an innovative method of real-world eye tracking to examine attention to sensitive questions and response scale points in Web, face-to-face, and paper-and-pencil self-administered questionnaire (SAQ) modes. We link gaze duration to responses in order to understand how respondents arrive at socially desirable or undesirable answers. This novel technique sheds light on how social desirability biases arise from deliberate misreporting and/or satisficing, and how these vary across modes.
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 30, Issue 4, pp. 535–549
Abstract: How can we elicit honest responses in surveys? Conjoint analysis has become a popular tool to address social desirability bias (SDB), or systematic survey misreporting on sensitive topics. However, there has been no direct evidence showing its suitability for this purpose. We propose a novel experimental design to identify conjoint analysis's ability to mitigate SDB. Specifically, we compare a standard, fully randomized conjoint design against a partially randomized design where only the sensitive attribute is varied between the two profiles in each task. We also include a control condition to remove confounding due to the increased attention to the varying attribute under the partially randomized design. We implement this empirical strategy in two studies on attitudes about environmental conservation and preferences about congressional candidates. In both studies, our estimates indicate that the fully randomized conjoint design could reduce SDB for the average marginal component effect (AMCE) of the sensitive attribute by about two-thirds of the AMCE itself. Although encouraging, we caution that our results are exploratory and exhibit some sensitivity to alternative model specifications, suggesting the need for additional confirmatory evidence based on the proposed design.
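For readers less familiar with conjoint estimation, the comparison described above amounts to estimating the sensitive attribute's AMCE separately under each design and inspecting the gap between the two estimates. The following is a minimal sketch under the simplifying assumptions of a binary sensitive attribute and a plain difference-in-means estimator; the data, variable names, and estimator are illustrative, not the paper's actual implementation.

```python
import numpy as np

def amce_binary(chosen, attribute):
    """Difference-in-means AMCE of a binary attribute on profile choice:
    P(chosen | attribute = 1) - P(chosen | attribute = 0)."""
    chosen = np.asarray(chosen, dtype=float)
    attribute = np.asarray(attribute, dtype=int)
    return chosen[attribute == 1].mean() - chosen[attribute == 0].mean()

# Placeholder profile-level data: whether each profile was chosen and the
# level of the sensitive attribute it displayed, under each design.
chosen_full, attr_full = [1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0, 1, 0]  # fully randomized
chosen_part, attr_part = [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0]  # only sensitive attribute varies

amce_full = amce_binary(chosen_full, attr_full)
amce_part = amce_binary(chosen_part, attr_part)

# The gap between the design-specific AMCEs (adjusted in the paper by a
# control condition for attention to the varying attribute) is the quantity
# used to gauge how much SDB the standard conjoint design mitigates.
print(f"AMCE (full): {amce_full:.2f}  AMCE (partial): {amce_part:.2f}  gap: {amce_full - amce_part:.2f}")
```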
Abstract: Partisanship is a stable trait, but expressions of partisan preferences can vary according to social context. When particular preferences become socially undesirable, some individuals refrain from expressing them in public, even in relatively anonymous settings such as surveys and polls. In this study, we rely on the psychological trait of self-monitoring to show that Americans who are more likely to adjust their behaviors to comply with social norms (i.e., high self-monitors) were less likely to express support for Donald Trump during the 2016 Presidential Election. In turn, as self-monitoring decreases, we find that the tendency to express support for Trump increases. This study suggests that – at least for some individuals – there may have been a tendency in 2016 to repress expressed support for Donald Trump in order to mask socially undesirable attitudes.
A considerable amount of research has examined the extent to which members of dominant cultures perceive minority groups as threatening their way of life. While various instruments measure these perceptions of threat, few researchers have empirically analysed the statistical properties of these scales. Specifically, studies have not adequately explored the social desirability of threat scales. The current study investigates the extent to which one set of threat scales (González et al., 2008) is internally consistent, or reliable, and explores social desirability within the González et al. (2008) integrated threat instruments by comparing self-reports to other reports (intimate-other and friend reports). Results indicate that the instruments are internally consistent and that self-reports and other reports of threat do not differ on most indices of threat.
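Internal consistency of the kind assessed here is most often summarized with Cronbach's alpha. The sketch below shows that computation; note that the choice of alpha and the data are assumptions made for illustration, not details taken from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to a three-item threat scale (1-5 agreement ratings).
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```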
Drawing on research from a mixed-methods project on gaming, we argue for a qualitative methodological approach called "interactive elicitation," a form of data collection that combines elements of photo elicitation, interviewing, and vignettes. After situating our broader research project exploring young people's experiences of violent open-world video games, we outline the process of conducting interactive elicitation, arguing for a mixed-methods approach in which participants are observed and interviewed both during and immediately after interacting with particular cultural artefacts, in this case the game GTA V. We reflect on the initial design of the research methodology and the problematic aspects of conducting the research – focusing on social desirability bias – before proffering adaptations to our approach in relation to complementary work in the field of Game Studies. Ultimately, we argue for immediacy in research on cultural experiences and for the importance of social desirability as an asset in framing interaction, both of which have implications for sociological and interdisciplinary research more widely.
Prior public opinion research has identified a wide range of circumstances in which polling results may be tainted by social desirability bias. In races pitting a Black candidate against White opponents, this has often been referred to as the "Bradley effect" (aka the "Wilder effect" or "Dinkins effect"), whereby survey respondents overstate their preference for the Black candidate. This study examines the accuracy of polling on same-sex marriage ballot measures relative to polling on other statewide ballot issues in all states voting on the issue from 1998 to 2012, controlling for a range of theoretically relevant contextual factors. There has been a great deal of speculation, though little empirical evidence, that polling systematically understates opposition to same-sex marriage. Consistent with social desirability bias, this study finds that opposition to same-sex marriage is about 5% to 7% greater on election day than in preelection polls.
Much of what we know about public service motivation comes from self-report measures. However, self-report questionnaires are vulnerable to social desirability bias because of respondents' tendencies to answer in a more socially acceptable way, and this bias threatens the validity of the measure. This study investigates whether characteristics of national culture influence social desirability bias in surveys on public service motivation. In particular, the impact of social desirability bias is analyzed with two concerns in mind: the construct validity and the inference validity of public service motivation measures. Experimental survey research (a list experiment) is conducted to examine the magnitude of social desirability bias and its associations with national culture in four countries: Japan, Korea, the Netherlands, and the United States. The results show that respondents in both collectivistic countries (Japan and Korea) and individualistic countries (the Netherlands and the United States) are likely to over-report on items of public service motivation, although the magnitude and pattern of this bias are stronger and more consistent in collectivistic countries. This study also finds a strong possibility of a moderator effect in correlational analyses in collectivistic countries, but it is doubtful this effect is present in individualistic countries. Overall, we suggest that the effects of social desirability bias should be investigated in public service motivation research and that the bias should be controlled for in future research.
Abstract: A key challenge in survey research is social desirability bias: respondents feel pressured to report acceptable attitudes and behaviors. Building on established findings, we argue that threat-inducing violent events are a heretofore unaccounted-for driver of social desirability bias. We probe this argument by investigating whether fatal terror attacks lead respondents to overreport past electoral participation, a well-known and measurable result of social desirability bias. Using a cross-national analysis and natural and survey experiments, we show that fatal terror attacks generate turnout overreporting. This highlights that threat-inducing violent events induce social desirability bias, that researchers need to account for the timing of survey fieldwork vis-à-vis such events, and that some of the previously reported post-violent-conflict increases in political participation may be more apparent than real.
Qualitative studies of vote buying find the practice to be common in many Latin American countries, but quantitative studies using surveys find little evidence of it. Social desirability bias can account for this discrepancy. We employ a survey-based list experiment to minimize the problem. After the 2008 Nicaraguan municipal elections, we asked about vote buying by campaigns using both a list experiment and the questions traditionally used in studies of vote buying, fielded on a nationally representative survey. Our list experiment estimated that 24% of registered voters in Nicaragua were offered a gift or service in exchange for their votes, whereas only 2% reported the behavior when asked directly. This detected social desirability bias is nonrandom, so analyses based on traditional obtrusive measures of vote buying are unreliable. We also provide systematic evidence of the importance of parties' monitoring strategies in determining who is targeted for vote buying.
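The list-experiment estimate reported here (and the same logic underlies the list experiments in the other abstracts above and below) rests on a simple difference in means: treatment respondents see the control items plus the sensitive item, so the difference in mean item counts estimates the prevalence of the sensitive behavior. A minimal sketch with placeholder data rather than the study's own:

```python
import numpy as np

# Placeholder item counts. Control respondents see J non-sensitive items;
# treatment respondents see the same J items plus the sensitive item
# ("a campaign offered you a gift or service in exchange for your vote").
control_counts = np.array([1, 2, 0, 3, 1, 2, 1, 0])
treatment_counts = np.array([2, 2, 0, 3, 1, 3, 1, 0])

# Difference-in-means estimator of the sensitive behavior's prevalence.
prevalence_hat = treatment_counts.mean() - control_counts.mean()

# Contrast with the share admitting the behavior when asked directly
# (24% vs. 2% in the Nicaragua study) to gauge social desirability bias.
direct_rate = 0.02
print(f"List estimate: {prevalence_hat:.2f}; direct question: {direct_rate:.2f}")
```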
Direct estimates based on election returns show that corruption is mildly punished at the polls. A large majority of survey respondents, however, state that they do not like corruption and will not support corrupt politicians. This has been interpreted as a product of social desirability bias: interviewees prefer to report socially accepted attitudes (rejection of corruption) rather than truthful responses (the intention to vote for their preferred candidates regardless of malfeasance). We test to what extent this is the case using a list experiment, which allows interviewees to be questioned in an unobtrusive way, removing the possible effects of social desirability. Our results show that the great majority of respondents report intentions to electorally punish allegedly corrupt candidates even when asked in an unobtrusive way. We discuss the implications of this finding for the limited electoral accountability of corruption.