Thirty-three interviewers and 2,600 respondents provide data for a study of the causes of differing levels of interviewer variance, based on an original study involving 41 interviewers and 3,000 households in a California city of about 100,000 population. Age, sex, SES, SM, and interviewer ratings are taken as independent variables. Variance differs among schedule items and by interviewer and respondent age and sex. The results indicate that interviewing is a more complex process than previously supposed, and that interviewer selection requires considerable thought. Random assignment is probably optimal until interviewing is better understood. 2 Tables. Modified HA.
"Data used in nationwide face-to-face surveys are almost always collected in multistage cluster samples. The relative homogeneity of the clusters selected in this way can lead to design effects at the sampling stage. Interviewers can further homogenize answers within the small geographic clusters that form the sampling points. The study presented here was designed to distinguish between interviewer effects and sampling-point effects using interpenetrated samples for conducting a nationwide survey on fear of crime. Even though one might, given the homogeneity of neighborhoods, assume that sampling-point effects would be especially strong for questions related to fear of crime in one's neighborhood, we found that, for most items, the interviewer was responsible for a greater share of the homogenizing effect than was the spatial clustering. This result can be understood if we recognize that these questions are part of a larger class of survey questions whose subject matter is either unfamiliar to the respondent or otherwise not well anchored in the mind of the respondent. These questions permit differing interpretations to be elicited by the interviewer." (author's abstract)
"Refusals are a significant source of non-response in surveys. During field periods of some surveys reasons for refusals are collected in call record data (as part of para-data). This article presents a study employing a content analysis of open-ended comments on reasons for refusals collected by interviewers in a survey of the German population (ALLBUS). We analysed the reasons for refusals contained in these comments, as well as to what extent these comments include information about factors relevant to participation in surveys. Additionally, we analysed the impact of interviewer characteristics – gender, age, education and experience – on data collection using various multilevel multinomial models. The results show that interviewer comments provide typical reasons for refusals, as well as specific information about target persons, their environment and the survey process. Interviewers' age and education influenced the collection of reasons for refusals. At the same time interviewer variances (obtained through multinomial multilevel models) were very high, showing that interviewers prefer to report certain reasons for refusals. The highest interviewer level variances were obtained for providing no comments at all. To improve data quality and reduce high interviewer impact, we suggest using improved standardised instruments to collect reasons for refusals. Codings based on a categorisation scheme which we developed for our content analysis show high reliability (kappa = .81). Thus, this scheme can be used as a basis for developing such standardised instruments." (author's abstract)
A description and illustration of two statistical techniques for measuring interviewer and question objectivity. One technique compares the variance (σ²) found within the total population with that found among subgroups, where the information gathered by a particular interviewer may be considered a subgroup. Another statistical method for measuring interviewer objectivity is to set up an analysis-of-variance table so that the contributions to σ² of subgrouping and of differences within subgroups may be compared. This method, however, demands replicated interviewer assignments, and practical considerations usually prevent such a procedure. Illustrations of both methods are given involving the use of nonpsychological material. H. H. Smythe.
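The analysis-of-variance table described above partitions the total sum of squares into a between-interviewer (subgrouping) component and a within-interviewer component, whose mean squares are compared via an F ratio. A minimal sketch on a tiny made-up dataset (the scores are illustrative only):

```python
import numpy as np

def anova_table(groups):
    """One-way ANOVA table: partition total variation into
    between-group (subgrouping) and within-group sums of squares,
    and form F = MS_between / MS_within."""
    k = len(groups)
    all_x = np.concatenate(groups)
    N = all_x.size
    grand = all_x.mean()
    ssb = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msb = ssb / (k - 1)
    msw = ssw / (N - k)
    return {"SS_between": ssb, "df_between": k - 1, "MS_between": msb,
            "SS_within": ssw, "df_within": N - k, "MS_within": msw,
            "F": msb / msw}

# Hypothetical replicated assignment: two interviewers, three scores each
tbl = anova_table([np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])])
```

A large F indicates that the subgrouping (i.e., which interviewer collected the data) contributes substantially to σ², the signature of low interviewer objectivity.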
This article investigates how different strategies used by interviewers when recording interviewer observations relate to observation accuracy. Before conducting interviews in a refreshment sample of the general population for the German PASS panel study, interviewers were asked to observe one key target variable of the study -- whether a household is at risk of poverty or not -- for all sampled households. In addition, interviewers recorded what strategies they had used to make their observations. For responding households, we assessed the accuracy of the observation by comparing it to an actual survey measure of poverty risk. Separate multilevel regression models attempting to explain the observed interviewer variance in observation accuracy for two types of households (those at risk and not at risk of poverty) using case-level strategies and aggregate interviewer tendencies reveal unique strategies that result in more accurate observations for each type of household. An aggregate fixed-effects model then reveals strategies that prove to be effective regardless of the type of household when accounting for unobserved interviewer heterogeneity.
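The accuracy assessment described above compares each interviewer observation against the survey measure, separately for the two household types (since the paper fits separate models for households at risk and not at risk of poverty). A minimal sketch with hypothetical 0/1 data:

```python
def accuracy_by_type(observed, actual):
    """Share of correct interviewer observations, computed separately
    for households actually at risk of poverty (actual == 1) and
    those not at risk (actual == 0)."""
    hit_risk = [o == a for o, a in zip(observed, actual) if a == 1]
    hit_safe = [o == a for o, a in zip(observed, actual) if a == 0]
    return sum(hit_risk) / len(hit_risk), sum(hit_safe) / len(hit_safe)

# Hypothetical data: interviewer observation vs. the survey measure
observed = [1, 1, 0, 0, 1, 0, 0, 0]
actual   = [1, 0, 0, 0, 1, 1, 0, 0]
acc_at_risk, acc_not_at_risk = accuracy_by_type(observed, actual)
```

Splitting accuracy this way matters because a strategy that helps identify at-risk households (raising the first rate) can simultaneously produce false alarms among households that are not at risk (lowering the second), which is why the paper models the two types separately.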
Research has shown that interviewers can significantly affect survey respondents' reported attitudes and behaviors. Several interviewer characteristics have been found to partially explain variation in respondents' answers across interviewers, particularly when questions are related to interviewers' observable characteristics such as gender, race, and age. However, less is known about if and how interviewers' religious appearance and religious attitudes affect survey responses and, more specifically, reports about religious attitudes. Collecting accurate information on religious attitudes is important, given the sensitivity of this information across the globe and the growing interest in understanding religious perceptions and misconceptions. This paper is the first to investigate (a) the independent effects and the interplay between interviewers' religious veil status and interviewers' religious attitudes on respondents' reported religious attitudes and (b) the magnitude of the interviewer variance explained by interviewers' religious characteristics. The data comes from a nationally representative survey of religious and political attitudes in Tunisia carried out in 2013. Data from the survey also includes information about interviewers' characteristics (including veil status for females) and interviewers' own religious attitudes based on their responses to the same survey questions asked of respondents. Results showed that respondents interviewed by veiled female interviewers reported greater religiosity than respondents interviewed by unveiled female interviewers. Equally important were interviewers' religious attitudes, which also independently affected the corresponding attitudes of respondents and explained a substantial percentage of the between-interviewer variance for several outcomes. The effect of interviewers' attitudes on respondents' attitudes was not stronger among veiled interviewers. Our investigation also revealed that the effect of interviewers' attitudes on respondents' reported attitudes operated somewhat differently for male and female respondents depending on the specific survey items. Future studies are needed to explore the mechanism(s) underlying these effects.