Unbundling evaluation use
In: Studies in educational evaluation, Vol. 29, Issue 1, pp. 1-12
ISSN: 0191-491X
In: Evaluation: the international journal of theory, research and practice, Vol. 18, Issue 1, pp. 61-77
ISSN: 1461-7153
The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator's role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results.
In: Evidence & policy: a journal of research, debate and practice, Vol. 17, Issue 4, pp. 661-687
ISSN: 1744-2656
Background: Evaluations are a useful tool to learn more about the effectiveness of public measures. In the era of evidence-based policymaking, recent research suggests that quality is an important determinant of the utilisation of evaluations. Despite this claim, hardly any empirical study has investigated whether the quality of an evaluation – measured by a meta-evaluation – influences its perceived utilisation by decision makers.
Aims and objectives: This article asks how the quality of an evaluation study is related to its perceived use, and investigates the relationship between the quality of an evaluation, assessed through a meta-evaluation, and how the evaluation is perceived and accepted by the parties concerned.
Methods: The empirical analyses were based on 34 external evaluations of upper secondary schools in the canton of Zurich, conducted from 2006 to 2014, together with a standardised survey of 307 representatives of these schools (teachers, administrators, members of quality development teams, and the heads of school oversight commissions).
Findings: We conclude that the quality of the evaluation, as assessed in a meta-evaluation, is not particularly associated with the perception of evaluation quality and the perceived use of the evaluation. The perceived quality, however, is related to the perceived impact of an evaluation.
Discussion and conclusion: These findings are relevant for evaluation research and practice, since they show that the quality of an evaluation and evaluation use do not necessarily go hand in hand.
In: Journal of MultiDisciplinary Evaluation: JMDE, Vol. 19, Issue 46
ISSN: 1556-8180
This article aimed to identify the emphasis placed on evaluation use within evaluation competency frameworks. A review of such frameworks shows an underlying focus on evaluation use in all of them; however, competencies explicitly addressing evaluation use appear only in more recent frameworks, reflecting growing attention to the topic. A theory of change for evaluation use is proposed, depicting the role of the competencies of both evaluators and critical users. The article argues for greater emphasis on competencies directly related to use, and for extending competencies beyond evaluators to users. It proposes establishing standardized, up-to-date evaluation training in academic institutions to professionalize evaluation and thereby promote the integration of evaluation into development interventions.
Keywords: competency frameworks, evaluation competencies, use of evaluation, evaluation training, development interventions.
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 2005, Issue 108, pp. 47-55
ISSN: 1534-875X
This chapter describes some of the challenges presented by nonformal education settings and discusses strategies for increasing evaluation use in nonformal education programs.
In: Evaluation and program planning: an international journal, Vol. 13, Issue 4, pp. 389-394
ISSN: 1873-7870
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 2000, Issue 88, pp. 25-37
ISSN: 1534-875X
Evaluation's role in facilitating transformative learning in organizations is the focus of this chapter. The theories underlying constructivist and transformative learning are described, along with specific evaluator roles and practices that support such learning.
In: Policy sciences: integrating knowledge and practice to advance human dignity, Vol. 55, Issue 2, pp. 283-309
ISSN: 1573-0891
Scientific evidence has become increasingly important for decision-making processes in contemporary democracies. On the one hand, research on the utilization of scientific knowledge in the political process has shown that decision-makers learn from evidence to improve policies and solve problems. On the other, scholars have underlined that actors learn from evidence to support their political interests, regardless of how it affects the policy problem. One conventional insight from the policy learning literature is that higher salience of a policy issue makes it much less likely that decision-makers use evidence in an "unpolitical" way. Nevertheless, only a few studies have systematically investigated how differences in issue salience between policy fields affect how decision-makers learn from evaluations at the individual level. Using multilevel regression models on data from a legislative survey in Switzerland, this paper shows that the salience and technical complexity of policy issues do not automatically lead to less policy learning and more political learning from policy evaluations. The empirical analysis also shows, however, that issue salience increases policy learning from evaluations when the policy issue is technically complex. Our findings contribute to research on policy learning and evidence-based policy making by linking the literatures on policy evaluation and learning, which helps analyze the micro-foundations of learning in public policy and administration.
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 2000, Issue 88, pp. 5-23
ISSN: 1534-875X
This chapter recasts evaluation use in terms of influence and proposes an integrated theory that conceptualizes evaluation influence in three dimensions: source, intention, and time.
In: Evaluation: the international journal of theory, research and practice, Vol. 20, Issue 4, pp. 428-446
ISSN: 1461-7153
This article investigates the European Union's evaluation system and its conduciveness to evaluation use. Taking the European Commission's LIFE programme as its case, the article makes an empirical contribution to an emerging focus in the literature on the importance of organization and institutions when analyzing evaluation use. By focusing on the European Union's evaluation system the article finds that evaluation use mainly takes place in the European Commission and less so in the European Parliament and the European Council. The main explanatory factors enabling evaluation use relate to the system's formalization of evaluation implementation and use; these factors ensure evaluation quality, timeliness and capacity in the Commission. At the same time, however, the system's formalization also impedes evaluation use, reducing the direct influence of evaluations on policy-making and effectively 'de-politicizing' programme evaluations and largely limiting their use to the level of programme management.
What effect do evaluation systems have on the use of evaluation? This is the research question guiding this PhD thesis. By answering this question as well as three sub-questions, the thesis addresses three important gaps in the evaluation literature: first, evaluation theory does a poor job of explaining non-use and justificatory uses of evaluation; second, evaluation theory does not account for the systemic context of evaluation in its more general explanations of evaluation use; and third, the literature does not account empirically for the micro-level of evaluation use in evaluation systems. The thesis draws on organisational theory, in particular organisational institutionalism, which explains organisational action and change as driven by legitimacy-seeking behaviour: organisations seek to legitimise themselves in order to survive in their environment. This theory is applied to the concept of the 'evaluation system'. Hence, the assumption underlying the thesis is that organisations within an evaluation system use evaluations to appear accountable rather than to improve policies. The thesis investigates the European Union's evaluation system with a particular focus on the European Commission. This is done in four articles. The first is a theoretical article introducing organisational institutionalism to the evaluation literature in order to explain non-use and justificatory uses of evaluations. The second is a historical analysis of the development of the European Commission's evaluation practices. The third is a case-based analysis of evaluation use in the European Commission. The fourth is an empirical article on policy learning from evaluations in three different Directorates-General of the European Commission.
The methodology used in the empirical articles is qualitative content analysis; the data comprised more than a hundred Commission documents and 58 interviews with Commission staff.
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 2006, Issue 112, pp. 89-97
ISSN: 1534-875X
Performance measurement can benefit evaluation by clarifying policy intent, program goals, and performance expectations and by facilitating systematic data collection. Because policymakers tend to support performance measurement for accountability purposes, linking performance measurement to evaluation has the potential to increase evaluation use among policymakers.
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 1999, Issue 82, pp. 57-66
ISSN: 1534-875X
The authors review ethical dilemmas in evaluation that emerge from efforts to promote use within a context of stakeholder participation. Case examples illustrate two potentially problematic domains: stakeholder selection and depth of stakeholder involvement. Also discussed are mediating factors in the occurrence and resolution of these dilemmas.
In: Knowledge, Vol. 2, Issue 2, pp. 237-262
In: New directions for evaluation: a publication of the American Evaluation Association, Vol. 2006, Issue 109, pp. 87-103
ISSN: 1534-875X
This chapter uses a program-level evaluation to assess the characteristics of smaller-scale project evaluations and their use for project improvement and accountability purposes.