Implementation Capacity and Evaluation Capacity
In: Oxford Research Encyclopedia of Politics
"Implementation Capacity and Evaluation Capacity" published on by Oxford University Press.
In: World futures review: a journal of strategic foresight, Volume 11, Issue 4, pp. 287-291
ISSN: 2169-2793
The subject of evaluating foresight work has been around almost as long as the professional practice itself, but the field has made little progress toward systematic evaluation of its work. This special issue marks the second collection of articles on that project, following a special issue of Futures in 2012 (Van Der Duin and Van Der Martin 2012). The issue takes a three-part approach: Part 1 covers evaluation of foresight in general and the evaluation approaches and methods that can support designing an appropriate evaluation; Part 2 covers evaluation of foresight work in organizations and its impact on long-term thinking and decision-making; and Part 3 covers evaluation of specific foresight activities, namely an undergraduate learner foresight experience and a health sector scenario development exercise. The foreword ends with a reflection on the continuing issue of foresight and evaluation.
In: Evaluation: the international journal of theory, research and practice, Volume 23, Issue 1, pp. 24-41
ISSN: 1461-7153
Ex-post evaluations are a potential tool for improving regulatory interventions and holding rule-makers accountable. For these reasons, the European Commission has promised to systematically evaluate its legislation, but it remains unclear whether actual evaluation capacity is being built up in the Commission's Directorates-General. This article describes and explains the variation in evaluation capacity between the Directorates-General by applying a theoretical model of evaluation capacity developed by Nielsen et al. to the European context. To gain an in-depth understanding of the Directorates-General's evaluation capacity, 20 Commission officials were interviewed. The results show that there is much variation in the extent to which Directorates-General prioritize evaluation, as well as in the amount of human and technological capital they invest in evaluation. Further analysis using fuzzy-set Qualitative Comparative Analysis reveals that part of this variation can be explained by the Directorates-General's total budgets, suggesting that Directorates-General with a tradition of evaluating spending programmes also attach more importance to legislative evaluations.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2008, Issue 120, pp. 55-69
ISSN: 1534-875X
Evaluation capacity building, or ECB, is an area of great interest within the field of evaluation as well as in Extension evaluation. Internal Extension evaluators have long offered training and technical assistance to help Extension educators conduct evaluation. Today ECB in Extension encompasses myriad activities and processes to advance evaluation practice and evaluative thinking. They can be described in a three-component framework: professional development, resources and supports, and organizational environment. This chapter describes the Extension experience, highlighting practices and challenges within each component, and presents the case of logic model dissemination as an illustration. The authors discuss the distinction between evaluator and ECB practitioner and call for clarity in purpose, role delineation, and expectations. They include a simple logic model for evaluating ECB, which focuses on linking ECB investments to individual, team, program, and organizational change. The authors conclude with a list of their own learnings and reflections from facing the challenges and many rewards of building evaluation capacity in complex organizations.
In: Evaluation: the international journal of theory, research and practice, Volume 9, Issue 3, pp. 365-369
ISSN: 1461-7153
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2012, Issue 134, pp. 93-101
ISSN: 1534-875X
This chapter shares lessons learned from evaluation capacity-building efforts undertaken by PREVAL in recent years (2004–2010). The author also discusses the aim of capacity building in evaluation within PREVAL, the key elements that should be considered in planning, monitoring, and evaluation (PME) systems, and the strategic uses of its processes and findings. Finally, the chapter outlines lessons learned regarding the role of stakeholders, and factors for success and failure identified from PREVAL's experience in building evaluation capacity in antipoverty rural projects in Latin America and the Caribbean.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2007, Issue 116, pp. 45-59
ISSN: 1534-875X
Over time, intentional process use can have the practical effect of building the evaluation capacity of an organization. This chapter outlines possible steps that take purposeful advantage of the evaluation process.
In: Journal of MultiDisciplinary Evaluation: JMDE, Volume 2, Issue 3, pp. 78-112
ISSN: 1556-8180
This paper documents a process of evaluation capacity building in a humanitarian organization in Afghanistan between 2001 and 2003. The authors carried out an annual evaluation and undertook evaluation capacity building activities. The analysis of the empirical data shows that, in the context of humanitarian organizations, the capacity building process would be improved if it were to i) employ a mix of participative and utilization-focused approaches, ii) organize participative workshops and on-the-job training while ensuring the continuity of collaborators, and iii) use a variety of dissemination/advocacy activities aimed at varied audiences.
In: CEval-Arbeitspapier (CEval working paper), Volume 21
In: Evaluation: the international journal of theory, research and practice, Volume 28, Issue 2, pp. 231-251
ISSN: 1461-7153
While "systemic thinking" is popular in the context of capacity development and evaluation, there is currently a lack of understanding about the benefits to employing systems theory in evaluation capacity development. Systems theory provides a useful orientation to the work involved in complex systems (e.g. national evaluation systems). This article illustrates how evaluation capacity development practitioners can use systems theory as a conceptual tool to gain a better understanding of the functional aspects and interrelationships present within a given evaluation system. Specifically, the systems theory perspective can help elucidate the reasons for the success or failure of a given evaluation capacity development program or activity. With the goal of motivating evaluation capacity development practitioners to use systems theory in their work, this article presents a systems theory framework for evaluation capacity development and offers practical examples of how it can be adopted.
In: Journal of youth development: JYD : bridging research and practice, Volume 7, Issue 1, pp. 24-34
ISSN: 2325-4017
A pilot program mentoring youth professionals through "learning-by-doing" projects yielded consistent increases in evaluation knowledge and skills over three years. Self-assessed skill improvements were greatest for preparatory processes (planning, focusing, design, and selecting methods) and for reporting competencies, which are more often emphasized in organizational evaluation requirements. Participating youth professionals also perceived smaller increases in data collection and analysis skills. Focus groups with each of the six evaluation "learning circle" groups revealed the benefits of participation, as well as needs for evaluation training and tools and challenges faced within the organizational culture.
In: Comparative policy analysis series
In: Evaluation journal of Australasia: EJA, Volume 5, Issue 2, pp. 41-47
ISSN: 2515-9372
Evaluation capacity-building entails not only developing the expertise needed to undertake robust and useful evaluations; it also involves creating and sustaining a market for that expertise by promoting an organisational culture in which evaluation is a routine part of 'the way we do things around here'. A challenge for evaluators is to contribute to evaluation capacity-building while also fulfilling their key responsibilities to undertake evaluations. A key strategy is to focus on both discerning value and adding value for clients/commissioners of evaluations. This paper takes as examples two related internal evaluation projects conducted for the Queensland Police Service that have added value for the client and, in doing so, have helped to promote and sustain an evaluation culture within the organisation. It describes key elements of these evaluations that contributed to evaluation capacity-building. The paper highlights the key role that evaluators themselves, especially internal evaluators, can take in evaluation capacity-building, and proposes that internal evaluators can, and should, integrate evaluation capacity-building into their routine program evaluation work.
In: Evaluation and program planning: an international journal, Volume 26, Issue 2, pp. 133-142
ISSN: 1873-7870
In: University of British Columbia. SCARP Graduating Projects
Planning and land use decisions have public health consequences. Community health is affected by development plans and projects, yet there is no global framework for evaluating these health impacts. Several assessment methods have been developed and used in recent decades. One such method is health impact assessment (HIA). HIAs evaluate health determinants, such as housing affordability and walkability, and the potential positive and negative health outcomes associated with changes in these determinants. The output of an HIA is an analysis of the net health impact of a plan, together with mitigation recommendations. This report was undertaken with support from Metro Vancouver planning staff to provide further research on HIAs and how they are being undertaken in the region. The research consists of a literature review of health assessment frameworks, a case study of seven North American HIAs, and an HIA questionnaire sent to local government staff in the Metro Vancouver region. The report addresses the state of health assessment in Metro Vancouver and answers the following questions:
• What are the driving forces and barriers behind undertaking HIAs?
• To what extent are HIAs being conducted in Metro Vancouver?
• What can be changed to increase the implementation of HIAs?