Ethnography in education evaluation
In: Evaluation and Program Planning, Volume 9, Issue 2, pp. 180-183
In: New directions for program evaluation: a quarterly sourcebook, Volume 1978, Issue 2, pp. 69-79
ISSN: 1534-875X
Abstract: Numerous problems must be faced in the implementation of a comprehensive evaluation system for vocational education programs, given the state of the art in vocational education evaluation and the extensive evaluation requirements and activities associated with programs funded by the Vocational Education Act of 1963.
In: Evaluation and program planning: an international journal, Volume 33, Issue 2, pp. 194-196
ISSN: 0149-7189
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2006, Issue 109, pp. 105-108
ISSN: 1534-875X
Abstract: This chapter summarizes the main points made in the preceding chapters, highlights themes running throughout the chapters, and suggests implications for the fields of STEM education evaluation and evaluation overall.
In: Evaluation review: a journal of applied social research, Volume 6, Issue 4, pp. 443-480
ISSN: 0193-841X, 0164-0259, 1552-3926
Meta-analysis of several hundred evaluations of Title I compensatory education programs shows that two distinct research designs consistently yield different results. The norm-referenced model portrays programs as positively effective, while the regression-discontinuity design shows them to be ineffective or even slightly harmful. Three potential biasing factors are discussed for each design: residual regression artifacts, attrition, and time-of-testing problems in the norm-referenced design; and assignment, measurement, and data preparation problems in the regression-discontinuity design. In lieu of more definitive research, the tentative conclusion is that in practice the norm-referenced design overestimates the program effect while the regression-discontinuity design underestimates it.
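The divergence described above can be illustrated on simulated data. The following sketch (Python) is not from the article; the cutoff, sample size, and true effect are illustrative assumptions. It assigns students to treatment when a noisy pretest falls below a cutoff, as Title I allocation typically did, and shows how a raw pre-post gain of the kind used in norm-referenced reporting is inflated by regression toward the mean, while a regression that conditions on the assignment score (the regression-discontinuity idea) recovers the assumed effect. It demonstrates only the overestimate mechanism; the underestimate the article attributes to the regression-discontinuity design arises from practical problems not modeled here.

```python
# Illustrative simulation (not from the article): norm-referenced gains
# vs. a regression-discontinuity estimate under cutoff-based assignment.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_effect = 2.0          # assumed program effect, in test-score points
cutoff = 45.0              # assumed eligibility cutoff on the pretest

ability = rng.normal(50, 10, n)           # latent achievement (no growth)
pretest = ability + rng.normal(0, 5, n)   # noisy pretest used for assignment
treated = (pretest < cutoff).astype(float)
posttest = ability + true_effect * treated + rng.normal(0, 5, n)

# Norm-referenced-style estimate: treated students' raw pre-post gain,
# taking the pretest as the no-treatment expectation. Selecting students
# with low noisy pretests guarantees regression toward the mean, so the
# raw gain overstates the effect.
gain = np.mean(posttest[treated == 1] - pretest[treated == 1])
print(f"norm-referenced-style gain: {gain:.2f}  (true effect {true_effect})")

# Regression-discontinuity-style estimate: regress the posttest on the
# centered assignment score plus a treatment indicator. Conditioning on
# the score that determined assignment removes the selection artifact.
X = np.column_stack([np.ones(n), pretest - cutoff, treated])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
print(f"regression-discontinuity estimate: {beta[2]:.2f}")
```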
In: Evaluation review: a journal of applied social research, Volume 12, Issue 5, pp. 483-509
ISSN: 0193-841X, 0164-0259, 1552-3926
Bilingual education has had a complex and controversial history in the United States. Bilingual education evaluators have been hampered by a lack of administrative support, a controversial political environment, and numerous technical difficulties. Since bilingual education programs are quite complex from an evaluation standpoint, it is not at all obvious how evaluators should best proceed in order to design and conduct useful evaluations. This article reviews part of the history of bilingual education and its implications for evaluation practice. Technical problems in conducting a bilingual evaluation are identified and strategies for coping with these problems are discussed. These strategies are based on the evaluation design that has been developed and implemented statewide in Connecticut. Finally, strategies for improving evaluation capabilities and for using evaluation results at the federal, state, and local levels are presented.
In: Journal of policy analysis and management: the journal of the Association for Public Policy Analysis and Management, Volume 3, Issue 2, pp. 299-305
ISSN: 0276-8739, 1520-6688
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2016, Issue 151, pp. 11-20
ISSN: 1534-875X
Abstract: Evaluative practice has a long and deep history in higher education. It has been a persistent part of instructional practice and curriculum, intricately entwined with scholarly efforts to address teaching and learning. From public policy and oversight perspectives, the questions of value and worth have often focused on inputs (faculty credentials, facilities, and the like) as well as fiscal responsibility. But in the last 30 years, attention has turned to student learning as a critical outcome and the assessment of learning as a principal endeavor. The developments in higher education assessment have involved increasingly sophisticated psychometric approaches to measurement as well as more teacherly orientations to the implementation of educational assessments within the individual contexts, and intentions, of colleges and universities. In this chapter, we introduce some of the issues in the field and argue that evaluation has a unique history that is committed to systematically bringing evidence of program outcomes and processes into the discourse of educators (administrators, faculty, and staff) as they examine and build on their own operations. We briefly review the current context and challenges and support increased evaluator-faculty collaboration. We make a case for how the analysis of evaluation practices in higher education is a means both to increasing expertise in those applications and to thinking about evaluation practices across developing and complex institutions.
In: New directions for evaluation: a publication of the American Evaluation Association, Volume 2016, Issue 151, pp. 85-96
ISSN: 1534-875X
Abstract: This chapter outlines the implementation of a pilot project to evaluate courses and an education program with pre-post designs in an undergraduate psychology program. The evaluation plan incorporates a variety of pre-post test designs to measure change in student performance across different contexts, including traditional and online courses, rather than relying solely on outcome measures without controls, as is common in higher education evaluation. Building on the essential value of this design, the chapter examines a largely faculty-directed evaluation program, highlighting the need for expertise in instrument development and pedagogical scholarship and demonstrating the importance of administrative support.
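A minimal sketch of the kind of pre-post comparison the chapter describes, on simulated data: the instrument, sample size, and scores below are illustrative assumptions, not taken from the chapter. It computes within-student gains for one course and tests them with a paired t-test, reporting a paired-samples effect size alongside the p-value.

```python
# Illustrative pre-post course comparison (simulated data, not the
# chapter's program): paired t-test on within-student score gains.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_students = 40

# Simulated percent-correct scores on the same instrument administered
# at the start and end of the course.
pre = rng.normal(55, 12, n_students).clip(0, 100)
post = (pre + rng.normal(10, 8, n_students)).clip(0, 100)

gains = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)     # paired t-test
cohens_d = gains.mean() / gains.std(ddof=1)      # paired-samples effect size

print(f"mean gain: {gains.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

Without a comparison group, a design like this attributes all measured change to the course itself, which is one reason the chapter's plan avoids relying solely on outcome measures without controls.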