Strengthening unemployment insurance: program improvements
In: Studies in unemployment insurance
In: Evaluation in practice series
This book, from SAGE's Evaluation in Practice series, considers variants of experimental evaluation designs, including those that are not commonly used but could be applied with much greater frequency. It also includes instructions for setting up such experiments within program processes to learn about the effects of improvement efforts.
In: The journal of business, Volume 49, Issue 3, p. 431
ISSN: 1537-5374
In: Evaluation journal of Australasia: EJA, Volume 12, Issue 2, pp. 4-14
ISSN: 2515-9372
A common evaluation purpose is to determine whether a policy or program was implemented as intended; this is referred to as formative evaluation, process evaluation, or evaluating program improvement. A well-designed formative evaluation is important in: detecting program drift; providing timely feedback to program staff to make cost-saving mid-course corrections; reassuring the sponsor that quality assurance measures are implemented to protect investments; and interpreting impact/outcome evaluation. A formative evaluation should not just gather data on deviations from an anticipated course of action, but also provide recommendations for improvement. Current methods for program improvement vary in their ability to solicit targeted recommendations. Root cause analysis (RCA) is a well-established, robust methodology used in a variety of disciplines. RCA has been primarily used by evaluators operating from a theory-driven orientation to evaluate the merit and worth of a program or policy. Surprisingly, a review of the literature suggests that RCA's utility as a program improvement tool has remained largely unrecognised in evaluation. This article illustrates the application of RCA in evaluating program improvement. The conditions under which RCA might be preferred over other formative evaluation methods are discussed.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2008, Issue 117, pp. 59-70
ISSN: 1534-875X
The impact of No Child Left Behind (NCLB) is usually understood in relation to schools and districts, but the legislation has also affected community-based organizations that operate school-linked programs. This case study of an afterschool program in California demonstrates how educational accountability systems that emphasize students' academic achievement and scientifically based research prompted evaluators to modify evaluation questions, methods, and analytic techniques. The external demands of NCLB transformed the evaluation to support the relevance and value of this community-based program within the evolving framework of accountability for the school. © Wiley Periodicals, Inc.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2009, Issue 121, pp. 27-42
ISSN: 1534-875X
The authors describe their heterogeneous 91-person research and evaluation unit at an operating foundation in St. Paul, Minnesota. They focus on evaluation for program improvement, one of several purposes of the studies they work on. The three authors write from their different manager positions within the unit. Included are the context of the organization, the strategic principles of their work, and how these fit into a program improvement model. Mattessich writes as the executive director and details his day-to-day evaluation work. Mueller writes as the associate director, who oversees most of the unit's evaluation work, while Holm-Hansen writes as a consulting scientist who leads a core research team of four to six. © Wiley Periodicals, Inc.
In: International Journal of Research in Business and Social Science: IJRBS, Volume 9, Issue 5, pp. 179-191
ISSN: 2147-4478
The results of an evaluation should be used for the envisioned goal and the evaluation process and/or outcomes should be used in practice and decision making. This article presents research whose objective was to establish the extent to which stakeholder involvement in evaluations impacts the utilization of evaluation findings for program improvement. Guided by the pragmatic paradigm and supported by the Utilization-Focused Evaluation Model and Knowledge Use Theory, the researchers assumed a descriptive and correlational design using mixed methods. The sample size for this study was 232 project staff from Non-Governmental Organizations (NGOs) in Kisumu Central Sub-County, Kenya. To analyze qualitative data, the open-ended responses from key informant interviews were recorded and coded appropriately for further analysis for themes through content analysis and comparative analysis. Frequencies and percentages were calculated to describe the basic characteristics of the quantitative data. To ensure the validity and reliability of the research instruments, pilot testing was conducted. Cronbach's alpha at α = 0.908 was attained as the reliability coefficient of the pre-test instruments. Tests of statistical assumptions were carried out before data analysis to avoid invalidation. A hypothesis was tested at the α = .05 level of significance and was rejected. The findings demonstrate that there is a significant relationship between stakeholder involvement in evaluations and the utilization of evaluation results. This research, therefore, reinforces literature and helps to understand the ways in which stakeholder involvement in evaluations influences the utilization of evaluation results. It informs the evaluation field of study, fills gaps in the evaluation use literature, and contributes to the appreciation of factors that predict and enhance the utilization of evaluation results.
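The abstract above reports a Cronbach's alpha reliability coefficient for its pilot instruments. For readers unfamiliar with the statistic, a minimal sketch of how it is computed is shown below; the function name and sample data are illustrative and are not taken from the cited study.

```python
# Illustrative sketch: Cronbach's alpha, the internal-consistency
# reliability coefficient reported in the abstract above.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items (each respondent gives identical answers
# on both items) yield the maximum reliability of 1.0.
responses = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(responses), 3))  # → 1.0
```

Values near 1 (such as the 0.908 reported above) indicate that the items on a scale measure the same underlying construct consistently.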
In: Evaluation review: a journal of applied social research, Volume 46, Issue 5, pp. 469-516
ISSN: 1552-3926
Background: This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices. Objectives: This article demonstrates the feasibility and benefits of blending DBIR, IS, and RCE practices with embedded randomized controlled trials (RCTs) to improve the pace and efficiency of program improvement. Research design: This article describes a two-cycle experimental test of staff-designed strategies for improving a workforce development program. Youth enrolled in Year Up's Professional Training Corps (PTC) programs were randomly assigned to "improvement strategies" designed to boost academic success and persistence through the 6-month learning and development (L&D) phase of the program, when participants spend most of their program-related time in courses offered by partner colleges. Subjects: The study sample includes 317 youth from three PTC program sites. Measures: The primary outcome measures are completion of the program's L&D phase and continued college enrollment beyond the L&D phase. Results: The improvement strategies designed and tested during the study increased program retention through L&D by nearly 10 percentage points and increased college persistence following L&D by 13 percentage points. Conclusion: Blending DBIR, IS, and RCE principles with a multi-cycle RCT generated highly credible estimates of the efficacy of the tested improvement strategies within a relatively short period of time (18 months) at modest cost and with reportedly low burden for program staff.
In: Asia-Pacific journal of risk and insurance: APJRI, Volume 2, Issue 2
ISSN: 2153-3792
In: Journal of public child welfare, Volume 9, Issue 1, pp. 42-64
ISSN: 1554-8740
Testimony issued by the General Accounting Office with an abstract that begins "This testimony discusses the Department of Agriculture's (USDA) farm loan programs, which are run by the Farm Service Agency (FSA). GAO (1) provides an overview of the financial condition of FSA's farm loan portfolio as of September 30, 2000 and (2) explains its decision to remove the farm programs from its high-risk list. GAO found that FSA had more than $16.6 billion in outstanding farm loans as of September 30, 2000; direct loans accounted for slightly more than half of this amount and guaranteed loans for slightly less than half. Of the $16.6 billion, about $2.1 billion was owed by borrowers who were delinquent on repaying their FSA loans. Most (87 percent) of the $2.1 billion was owed on direct farm loans. Although the total amount due on the problem loans remains high, this financial position reflects improvement in FSA's direct loan portfolio in recent years as well as a continuation of a relatively healthy guaranteed loan portfolio. In January 2001, GAO removed FSA's farm loan programs from its high-risk list. Several actions taken by Congress and USDA, many of which GAO recommended, have significantly improved the operation and condition of USDA's farm loan programs."
In: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5051981/
The RAND Program Manager's Guide is a tool to help those responsible for managing or implementing programs assess program performance, consider options for improvement, implement solutions, and then assess how well the changes worked.
In: Tax Notes International, Volume 51, Issue 1
In: Evaluation and Program Planning, Volume 48, pp. 83-89
In: R 4282
In: NCRVE/UCB
In: Rand Library collection