Search Results
1,862 results
Personality and Uninformed Response Error
In: The Journal of social psychology, Volume 126, Issue 1, pp. 37-45
ISSN: 1940-1183
Modelling response error in school effectiveness research
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Volume 58, Issue 2, pp. 138-160
ISSN: 1467-9574
Statistical modelling of school effectiveness in educational research is considered. Variance component models are generally accepted for the analysis of such studies. A shortcoming is that outcome variables are still treated as if measured without error, and unreliable variables bias the estimates of the other model parameters. The variability of the relationships across schools, and the effects of schools on students' outcomes, differ substantially when the measurement error in the dependent variables of the variance component models is taken into account. The random effects model can be extended to handle measurement error via a response model, leading to a random effects item response theory model. This extended random effects model is particularly suitable when subjects are measured repeatedly on the same outcome at several points in time.
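A minimal sketch of the attenuation problem this abstract describes, not the authors' model: the variance values below (school variance 0.3, student variance 1.0, measurement error variance 0.5) are assumptions chosen for illustration. Ignoring measurement error in the outcome shrinks the estimated school-level share of variance.

```python
# Illustrative simulation: measurement error in the outcome attenuates the
# estimated intraclass correlation of a simple variance component model.
# All variance values are assumed for the example, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students = 200, 30
tau2, sigma2, me_var = 0.3, 1.0, 0.5   # school var, student var, measurement error var

u = rng.normal(0, np.sqrt(tau2), n_schools)                                   # school random effects
theta = u[:, None] + rng.normal(0, np.sqrt(sigma2), (n_schools, n_students))  # true outcomes
y = theta + rng.normal(0, np.sqrt(me_var), theta.shape)                       # observed with error

def icc(x):
    """Intraclass correlation via one-way ANOVA moment estimators."""
    msb = n_students * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n_schools - 1)
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n_schools * (n_students - 1))
    tau2_hat = (msb - msw) / n_students
    return tau2_hat / (tau2_hat + msw)

print(f"ICC from true scores:     {icc(theta):.3f}")  # ~ 0.3/1.3 = 0.23
print(f"ICC from observed scores: {icc(y):.3f}")      # attenuated: ~ 0.3/1.8 = 0.17
```

With these assumed values the intraclass correlation drops from roughly 0.23 to roughly 0.17 when the error-prone scores are analysed as if error-free, which is the bias the random effects item response theory extension is meant to remove.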
Response Error in Self-Reported Recreation Participation
In: Journal of leisure research: JLR, Volume 16, Issue 4, pp. 322-329
ISSN: 2159-6417
Time Sequence and the Response Error
In: The sociological review, Volume 6, Issue 2, pp. 229-239
ISSN: 1467-954X
Response Error in Earnings Functions for Nonblack Males
In: Sociological methods and research, Volume 6, Issue 2, pp. 241-280
ISSN: 1552-8294
Biases due to measurement errors in an earnings function for nonblack males are assessed by estimating unobserved variable models with data from the Income Supplement Reinterview program of the March 1973 Current Population Survey and from the remeasurement program of the 1973 Occupational Changes in a Generation-II survey. We find that reports of social origins, educational and occupational attainments, labor supply, and earnings of nonblack males are subject to primarily random response errors. Logarithmic earnings is one of the most accurately measured indicators of socioeconomic success. Further, retrospective reports of status variables are as reliable as contemporaneous reports. When measurement errors are ignored for nonblacks, the total economic return to schooling is underestimated by about 16% and the effects of some background variables are underestimated by as much as 15%. The total effects of first and current job status are underestimated by about 20% when measurement errors are ignored, as are the unmediated effects of current job status. Conflicting evidence is presented on whether respondents tend to understate the consistency between their earnings and educational attainments in the Current Population Survey. If there is such a tendency, the unmediated effects of education are modestly understated when response errors are ignored; if there is not, they are overstated.
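A back-of-the-envelope illustration of the attenuation mechanism behind an underestimate of this kind; the true return (0.08) and the schooling reliability (0.85) below are assumptions for the sketch, not the article's estimates. Classical random error in a regressor shrinks the OLS slope by the reliability ratio.

```python
# Hypothetical illustration: classical measurement error in reported schooling
# attenuates the estimated return to schooling by the reliability ratio.
# The slope 0.08 and reliability 0.85 are assumed, not from the article.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
schooling = rng.normal(12, 3, n)                            # true years of schooling
log_earn = 1.0 + 0.08 * schooling + rng.normal(0, 0.4, n)   # log earnings, true slope 0.08
err_sd = 3 * np.sqrt(1 / 0.85 - 1)                          # so var(true)/var(reported) = 0.85
reported = schooling + rng.normal(0, err_sd, n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(f"slope on true schooling:     {slope(schooling, log_earn):.4f}")  # ~0.080
print(f"slope on reported schooling: {slope(reported, log_earn):.4f}")   # ~0.85 * 0.08 = 0.068
```

Under these assumed numbers the slope on the error-prone regressor is understated by about 15%, the same direction and rough magnitude as the article's reinterview-based correction for the return to schooling.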
Adjusting for Response Error in Panel Surveys: A Latent Class Approach
In: Sociological methods and research, Volume 17, Issue 1, pp. 65-92
ISSN: 1552-8294
Estimation of the current distribution of labor force characteristics, as well as individual-level changes in these characteristics, is threatened in the Current Population Survey (CPS) by "rotation group bias." Similar problems are likely to arise in other surveys that use a rotating panel format (e.g., the Survey of Income and Program Participation, SIPP) or are forced to administer questionnaires in different formats from wave to wave for some fraction of the sample. This article presents an analysis of response error (misclassification error) in the CPS that reconciles observed differences among rotation groups, and proposes that the same general approach can be used to model response bias in other panel surveys such as SIPP. It is hypothesized that the greater prevalence of unemployment observed in the initial CPS interview arises from errors in classifying individuals into labor force statuses. Multiple-group latent class models are developed for the November 1979 CPS file that estimate (1) the "true" prevalence of each labor force status, (2) the prevalence of misclassification, (3) the relationship between true labor force status and the type of interview conducted (i.e., telephone versus face-to-face, self versus proxy response), and (4) the difference between the error structures observed in the first rotation group and those found in other groups. Results suggest that the true prevalence of labor force statuses is constant over rotation groups once response errors have been accounted for.
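A small numerical sketch of the article's hypothesis, with entirely hypothetical misclassification rates: if the true unemployment rate is identical in every rotation group but the first interview has a higher false-positive rate, the observed rate in rotation group 1 will exceed that of later groups.

```python
# Hypothetical illustration: constant true prevalence plus group-specific
# misclassification reproduces "rotation group bias" in observed rates.
# All rates below are invented for the example, not CPS estimates.
true_prev = 0.06  # assumed true unemployment prevalence, identical across groups

# (false-positive rate, false-negative rate) per rotation group -- hypothetical
error_rates = {"rotation group 1": (0.020, 0.10),
               "rotation groups 2-8": (0.008, 0.10)}

for group, (fp, fn) in error_rates.items():
    # P(classified unemployed) = P(true) * (1 - fn) + P(not true) * fp
    observed = true_prev * (1 - fn) + (1 - true_prev) * fp
    print(f"{group}: observed prevalence = {observed:.4f}")
```

With these invented rates the first rotation group shows about 7.3% observed unemployment against about 6.2% in later groups, even though the underlying prevalence is 6% everywhere; this is the pattern the multiple-group latent class models are built to account for.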
Non-response error versus measurement error: A dilemma when using mail questionnaires for election studies
In: Australian journal of political science: journal of the Australasian Political Studies Association, Volume 41, Issue 1, pp. 107-118
ISSN: 1363-030X
Non-response Error versus Measurement Error: A Dilemma when Using Mail Questionnaires for Election Studies
One strategy used to increase response rates when using mail questionnaires is to prolong the period of data collection. This paper examines the consequences of such a strategy for the Australian Election Studies (1996-2004). The findings suggest that the strategy has both positive and negative consequences for the overall quality of the survey. On the one hand, prolonging the period of data collection decreases non-response error by providing a somewhat better representation of hard-to-reach groups of respondents such as the young, the full-time employed, and those uninterested in politics. On the other hand, it increases the risk of measurement error. The evidence suggests that post-election events significantly affect respondents' political opinions. More specifically, the longer respondents wait to return their questionnaire after the election, the more positive their opinions of winning leaders, the more negative their opinions of defeated leaders, and the more distant they feel from the policies of defeated parties. Australian investigators thus face a dilemma in deciding when to stop the fieldwork period for their mail election surveys: they must choose between non-response error and measurement error.
BASE
Panel survey assessment of elapsed time response error in travel spending measurement
In: Journal of hospitality & leisure marketing: the international forum for research, theory & practice, Volume 1, Issue 1, pp. 39-50
ISSN: 1541-0897
Non-response error versus measurement error: A dilemma when using mail questionnaires for election studies
In: Australian journal of political science: journal of the Australasian Political Studies Association, Volume 41, Issue 1, pp. 107-118
ISSN: 1036-1146
A Scent of Strategy: Response Error in a List Experiment on Anti-Immigrant Sentiment
In: Methods, data, analyses: mda; journal for quantitative methods and survey methodology, Volume 18, Issue 2, pp. 249-262
ISSN: 2190-4936
This Research Note reports on a list experiment regarding anti-immigrant sentiment (n=1,965) that was fielded in Spain in 2020. Among participants with left-of-center ideology, the experiment produced a negative difference-in-means between treatment and control. Drawing on Zigerell's (2011) deflation hypothesis, we assess the possibility that leftist treatment-group respondents may have lowered their item counts by more than one to distance themselves unmistakably from the sensitive item. We consider this possibility plausible in a context of intense polarization where immigration attitudes are closely associated with political ideology. This study's data speak to the results of recent meta-analyses showing that list experiments fail when applied to prejudiced attitudes and other highly sensitive issues, i.e., precisely the kind of issues for which the technique ought to work best. We conclude that the possibility of strategic response error in specific respondent categories needs to be considered when designing and interpreting list experiments.
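A minimal simulation of the standard difference-in-means estimator and of the deflation mechanism the note invokes; the endorsement rate, the deflating share, and the baseline-item distribution are all assumptions for illustration, not the study's data.

```python
# Hypothetical sketch of a list experiment with strategic deflation
# (Zigerell 2011): some treatment-group respondents lower their item count
# by more than one, pushing the difference-in-means below zero.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
control = rng.binomial(3, 0.5, n)        # count of 3 baseline items endorsed

baseline_t = rng.binomial(3, 0.5, n)     # treatment group's baseline endorsements
sensitive = rng.binomial(1, 0.10, n)     # true sensitive-item endorsement (assumed 10%)
deflate = rng.binomial(1, 0.30, n)       # share who deflate to distance themselves (assumed 30%)
treatment = np.clip(baseline_t + sensitive - 2 * deflate * (1 - sensitive), 0, 4)

dim = treatment.mean() - control.mean()  # standard prevalence estimator
print(f"difference-in-means: {dim:.3f}") # negative under deflation
```

Because deflators subtract more than the sensitive item adds, the estimator goes negative despite a genuinely positive endorsement rate, which matches the pattern the note reports among left-of-center respondents.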
Narratives of Response Error From Cognitive Interviews of Survey Questions About Normative Behavior
In: Sociological methods and research, Volume 46, Issue 3, pp. 540-564
ISSN: 1552-8294
That sample surveys yield higher rates of normative behaviors than actual behavior warrants is well evidenced in the research literature. Less well understood is the source of this error. Twenty-five cognitive interviews were conducted to probe responses to a set of common, conventional survey questions about one such normative behavior: religious service attendance. Answers to the survey questions and cognitive probes are compared both quantitatively and qualitatively. Half of the respondents amended their answer during cognitive probing, all amendments indicating a lower rate of attendance than originally reported, yielding a statistically significant reduction in reported attendance. Narrative responses shed light on the source of bias: respondents pragmatically interpreted the survey question to allow themselves to include other types of religious behavior, to report on a more religious past, and to discount current constraints on their religious behavior, in order to report aspirational or normative religious identities.