Introduction to online surveys -- Developing the survey : questions and answers -- Ethical considerations -- Sampling -- Using a panel in your research -- Comparative survey research -- Incentives for respondents -- Selecting survey software -- Programming the survey -- Fieldwork -- Processing and cleaning the data -- Weighting survey data -- Reporting survey results -- Making data available to others -- The future of Web surveys
Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet, and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. We examined an exact test of measurement invariance and composite reliability. The results showed full data comparability across devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but only limited comparability across different numbers of scale points. Device effects on reliability emerged in the interactions with formats and number of scale points: the VAS, mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys.
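The composite reliability the abstract refers to is a standard scale-reliability index computed from standardized factor loadings. As a minimal illustrative sketch (not the authors' actual analysis code, and with hypothetical function and variable names), it can be computed as:

```python
def composite_reliability(loadings, error_variances=None):
    """Composite reliability (CR) from standardized factor loadings.

    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
    If error variances are not supplied, they are derived as 1 - loading^2,
    which assumes the loadings are standardized.
    """
    if error_variances is None:
        error_variances = [1 - l ** 2 for l in loadings]
    explained = sum(loadings) ** 2
    return explained / (explained + sum(error_variances))

# Example: three items loading 0.8, 0.7, and 0.9 on a single factor
cr = composite_reliability([0.8, 0.7, 0.9])
```

In practice the loadings would come from a confirmatory factor model fitted per device group; the sketch only shows the index itself.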
With the rise of mobile surveys comes the need for shorter questionnaires. We investigate the modularization of an existing questionnaire in the Longitudinal Internet Studies for the Social Sciences (LISS) Panel in the Netherlands. We randomly divided respondents into a normal-length survey condition, a condition where the same survey was split into 3 parts, and a condition where the survey was split into 10 parts. Respondents received the parts consecutively at regular intervals over a 1-month period. We discuss response rates, data quality measures, and respondents' evaluation of the questionnaire. Our results indicate higher start rates when the survey is cut into smaller parts, but also higher dropout rates. However, the fraction of missing information is lower in the 3- and 10-part conditions. More respondents use their mobile phone for survey completion when the survey is shorter. We find fewer item missings and less satisficing in shorter surveys. We find no effect on neutral or extreme responding, nor on estimates of the validity of answers. Respondents with both low and high education, and both young and old respondents, evaluate the shorter surveys better than the normal-length survey.
Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using data from the Dutch Longitudinal Internet Studies for the Social Sciences (LISS) panel, we evaluate which devices respondents use over time. We study the measurement error associated with each device and show that measurement errors are larger on tablets and smartphones than on PCs. To gain insight into the causes of these differences, we study changes in measurement error over time associated with a switch of devices across two consecutive waves of the panel. We show that, within individuals, measurement errors do not change with a switch in device. We therefore conclude that the higher measurement error on tablets and smartphones is associated with self-selection of the sample into using a particular device.
"Straightlining, an indicator of satisficing, refers to giving the same answer to a series of questions arranged in a grid. We investigated whether straightlining changes with respondents' panel experience in two open-access Internet panels in the Netherlands: the LISS and Dutch Immigrant panels. Specifically, we considered straightlining on 10 grid questions in LISS core modules (7 waves) and on a grid of evaluation questions in both the LISS panel (150+ waves) and the Dutch Immigrant panel (50+ waves). For both the core modules and the evaluation questions, we found that straightlining increases with respondents' panel experience for at least three years. Straightlining is also associated with younger age and with non-Western first-generation immigrants. Where straightlining was a plausible set of answers, its prevalence was much higher (15-40%) than where straightlining was implausible (<2% in wave 1)." (author's abstract)
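The straightlining measure described here, giving the identical answer to every item in a grid, can be flagged mechanically. A minimal sketch, with hypothetical helper names (the study's own operationalization may differ):

```python
def is_straightlining(grid_answers):
    """Flag a respondent as straightlining if every item in the grid
    received exactly the same answer."""
    return len(set(grid_answers)) == 1

def straightlining_rate(grid_by_respondent):
    """Share of respondents who straightlined a given grid question."""
    flags = [is_straightlining(answers) for answers in grid_by_respondent]
    return sum(flags) / len(flags)

# Example: four respondents answering a 3-item grid on a 5-point scale;
# the first and third give identical answers to all items.
rate = straightlining_rate([[3, 3, 3], [1, 2, 3], [5, 5, 5], [2, 2, 1]])  # 0.5
```

Note that, as the abstract stresses, identical answers can be a plausible response pattern for some grids, so a flag like this indicates satisficing only in context.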
This article reports on a pilot study conducted in a probability-based online panel in the Netherlands. Two parallel surveys were fielded: one in the traditional questionnaire layout of the panel, and the other optimized for mobile completion with new software that uses a responsive design (i.e., the layout adapts to the chosen device). Respondents could choose whether to complete the survey on their mobile phone or on a regular desktop. Results show that a substantial share of respondents (57%) used their mobile phone for survey completion. No differences were found between mobile and desktop users with regard to break-offs, item nonresponse, time to complete the survey, or response effects such as the length of answers to an open-ended question and the number of responses in a check-all-that-apply question. A considerable number of respondents gave permission to record their GPS coordinates, which help define where the survey was taken. Income, household size, and household composition were found to predict mobile completion. In addition, younger respondents, who typically form a hard-to-reach group, showed higher mobile completion rates.
Respondents follow simple heuristics in interpreting the visual features of questions. The authors carried out two experiments in two panels to investigate how visual heuristics affect the answers to survey questions. In the first experiment, the authors varied the distance between scale points in a 5-point scale to investigate whether respondents use the conceptual or the visual midpoint of a scale. In the second experiment, the authors used different end-point labels for a 5-point scale, adding different shadings of color and numbers that differed in both sign and value (2 to -2), to study whether options that are similar in appearance are considered conceptually closer than options that are dissimilar in appearance. The authors predicted a hierarchy of features that respondents attend to, with verbal labels taking precedence over numerical labels, and numerical labels taking precedence over visual cues. The results confirmed this hypothesis: the effects of the spacing of response options and of different end points were apparent only in polar-point scales, not in fully labeled scales. In addition, this study on two panels, one consisting of highly trained respondents and the other of relatively fresh respondents, shows that trained respondents are affected by the distance between response options whereas relatively new respondents are not. To reduce the effect of visual cues, and given the robustness of the results, the authors suggest using fully labeled 5-point scales in survey questions.
The goal of this research was to determine the best way to present mixed-device surveys. We investigate the effects of survey method (messenger versus regular survey), answer scale, device used, and personal characteristics such as gender, age, and education on break-off rates, substantive answers, completion time, and respondents' evaluation of the survey. Our research does not suggest that a messenger format affects mixed-device surveys positively. Further research is needed to investigate how to optimally present mixed-device surveys in order to increase participation and data quality.