Benchmarks and Citizen Judgments of Local Government Performance: Findings from a Survey Experiment
In: Public Management Review, Vol. 17, No. 2, pp. 288-304
ISSN: 1471-9037
In: Nonprofit and Voluntary Sector Quarterly, Vol. 43, No. 5, pp. 910-925
ISSN: 1552-7395
Many nonprofits rely on private donations and government grants, but it is still unclear how these sources of funding may interact or even influence each other. To examine the behavioral aspect of the crowding-out hypothesis, we conducted an online survey experiment ( n = 562) to test if government funding of a hypothetical nonprofit would influence donations. Our results show that a nonprofit with government funding, compared to an identical hypothetical organization without government funding, received 25% less in average donations (US$35 vs. US$47) and was about half as likely (21% vs. 38%) to receive all the money in a forced-choice scenario. However, the crowding-out effect of government funding appears much weaker for those who are arts patrons or who have previously contributed to the arts. Interestingly, this crowding-out effect seems insensitive to the amount of government funding and to labeling the government funding as coming from a prestigious source (e.g., National Endowment for the Arts [NEA]).
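A quick arithmetic check of the contrasts quoted above, not a re-analysis: the only inputs are the means and proportions reported in the abstract, and the underlying group sizes are not used here.

```python
# Back-of-the-envelope check of the treatment contrasts reported in the abstract.
# The figures below are taken from the abstract itself; nothing else is assumed.

mean_with_gov = 35.0     # US$, average donation to the nonprofit WITH government funding
mean_without_gov = 47.0  # US$, average donation to the identical nonprofit WITHOUT it

relative_drop = 1 - mean_with_gov / mean_without_gov
print(f"Relative drop in average donation: {relative_drop:.0%}")  # ~26%, i.e. roughly "25% less"

share_with_gov = 0.21     # forced-choice: share giving all the money to the funded nonprofit
share_without_gov = 0.38
print(f"Ratio of forced-choice shares: {share_with_gov / share_without_gov:.2f}")  # ~0.55, "about half as likely"
```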
In: The American Review of Public Administration (ARPA), Vol. 44, No. 3, pp. 324-338
ISSN: 1552-3357
Performance management is widely assumed to be an effective strategy for improving outcomes in the public sector. However, few attempts have been made to empirically test this assumption. Using data on New York City public schools, we examine the relationship between performance management practices by school leaders and educational outcomes, as measured by standardized test scores. The empirical results show that schools that do a better job at performance management indeed have better outcomes in terms of both the level and gain in standardized test scores, even when controlling for student, staffing, and school characteristics. Thus, our findings provide some rare empirical support for the key assumption behind the performance management movement in public administration.
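The abstract does not report the exact model, so the following is only a minimal sketch of the kind of specification it describes: an OLS regression of test scores on a performance-management index plus student, staffing, and school controls. The data are simulated and every variable name (pm_index, pct_poverty, and so on) is hypothetical.

```python
# Illustrative sketch only: simulated data and hypothetical variable names standing in
# for the NYC school data described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # hypothetical number of schools

df = pd.DataFrame({
    "pm_index": rng.normal(0, 1, n),        # performance-management practices (standardized index)
    "pct_poverty": rng.uniform(0, 1, n),    # student characteristics control
    "pupil_teacher": rng.normal(15, 2, n),  # staffing control
    "enrollment": rng.normal(600, 150, n),  # school characteristics control
})
# Simulate test scores with a positive pm_index coefficient, the pattern the abstract reports finding
df["test_score"] = (650 + 5 * df["pm_index"] - 30 * df["pct_poverty"]
                    - 0.5 * df["pupil_teacher"] + rng.normal(0, 10, n))

fit = smf.ols("test_score ~ pm_index + pct_poverty + pupil_teacher + enrollment", data=df).fit()
print(fit.params["pm_index"], fit.pvalues["pm_index"])  # coefficient on performance management
```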
In: Nonprofit and Voluntary Sector Quarterly, Vol. 41, No. 6, pp. 1014-1028
ISSN: 1552-7395
This study aims to compare two widely used methods of original data collection in nonprofit research: web and mail surveys. We employ an experimental design to assign a web-based survey and a mail survey to nonprofit professionals working in human services organizations in New Jersey. We then compare responses generated from the two survey methods in terms of response rates and data quality. Our study finds that the mail survey achieved a significantly higher response rate than the web survey, and data obtained from the mail survey produced higher internal consistency than that obtained from the web survey. There was no difference between methods, however, in respondent characteristics, the completeness of the survey, and the percentage of missing items. Taken together, the findings suggest that a mail survey, although more costly, may have response-rate and data-quality advantages over a web survey as a methodology for gathering data from nonprofit organizations.
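A rough sketch of the two comparisons the abstract reports, on made-up numbers: a two-proportion z-statistic for the response-rate gap and Cronbach's alpha for internal consistency. The counts, scale items, and resulting values are hypothetical, not figures from the study.

```python
# Toy illustration of a response-rate comparison and an internal-consistency check.
from statistics import variance

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic for a difference in response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    return (p_a - p_b) / se

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of item columns of equal length."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent scale totals
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

# Hypothetical example: 180 of 400 mail surveys returned vs. 120 of 400 web surveys completed
print("z for response-rate gap:", round(two_prop_z(180, 400, 120, 400), 2))

# Hypothetical 3-item scale answered by 5 respondents (rows = items, columns = respondents)
mail_items = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [5, 5, 4, 4, 5]]
print("alpha (mail, toy data):", round(cronbach_alpha(mail_items), 2))
```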
In: The American Review of Public Administration (ARPA), Vol. 42, No. 1, pp. 54-65
ISSN: 1552-3357
The public administration literature has consistently questioned the validity of satisfaction surveys as a measure of government performance, particularly in comparison with more objective official measures. The authors examine this objective-subjective debate using unique data from a large survey distributed to nearly 1 million parents of children in the New York City public schools along with officially reported measures of school performance for about 900 schools. Their results suggest that the official measures of school performance are significant and important predictors of aggregate parental satisfaction, even after controlling for school and student characteristics. They conclude that public school parents form their satisfaction judgment in ways that correspond fairly closely with officially measured school performance. The results can also be interpreted as suggesting that the official performance measures reflect, at least in part, aspects of public schooling that matter to parents.
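As an illustration of the objective-subjective comparison described above, a minimal sketch on simulated data: individual parent responses are aggregated to school-level mean satisfaction and then related to an "official" performance score. The school counts, scales, and effect size are assumptions, not the authors' results.

```python
# Simulated stand-in for the parent survey and official school performance measures.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_schools, parents_per_school = 900, 100  # order of magnitude only; purely illustrative

official = rng.normal(70, 10, n_schools)              # hypothetical official performance score
school_id = np.repeat(np.arange(n_schools), parents_per_school)
# Parent satisfaction (1-5 scale) loosely tracking the official measure, plus individual noise
satisfaction = np.clip(
    3 + 0.05 * (official[school_id] - 70) + rng.normal(0, 1, school_id.size), 1, 5
)

parents = pd.DataFrame({"school": school_id, "satisfaction": satisfaction})
school_means = parents.groupby("school")["satisfaction"].mean()  # aggregate to school level

print("school-level correlation:", round(np.corrcoef(school_means, official)[0, 1], 2))
```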
In: Public Administration, Vol. 88, No. 2, pp. 551-563
ISSN: 1467-9299
In: Public Administration, Vol. 85, No. 1, pp. 215-226
ISSN: 1467-9299
This paper introduces the method of importance‐performance analysis of citizen surveys, a useful approach to understanding citizen satisfaction with local government services. Using data from a US national online panel, we directly compare two approaches to importance‐performance analysis: one employing an explicitly stated measure of importance, the other using a measure of importance derived from regression analysis. The different results that the two approaches give suggest that local government administrators and policy analysts arrive at distinctly different conclusions depending on which importance measure they use. These differences are illustrated by simulating the change in citizen satisfaction that would result from improvement in the top‐rated services according to each measure. Research and policy implications are discussed.
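A minimal sketch of the two importance measures being compared: stated importance as a direct survey rating versus derived importance as standardized regression coefficients from a model of overall satisfaction on ratings of individual services. The service names, weights, and stated-importance numbers below are hypothetical, not values from the panel data.

```python
# Stated vs. derived importance on simulated citizen-survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
services = ["police", "roads", "parks", "trash"]

# Simulated service ratings (1-10) and an overall satisfaction that weights them unevenly
ratings = pd.DataFrame({s: rng.integers(1, 11, n) for s in services})
overall = (0.5 * ratings["police"] + 0.3 * ratings["roads"]
           + 0.15 * ratings["parks"] + 0.05 * ratings["trash"] + rng.normal(0, 1, n))

# Derived importance: standardized OLS coefficients of overall satisfaction on the ratings
X = sm.add_constant((ratings - ratings.mean()) / ratings.std())
y = (overall - overall.mean()) / overall.std()
derived = sm.OLS(y, X).fit().params.drop("const")

# Stated importance: a direct rating; in a real survey these come from respondents
stated = pd.Series({"police": 9.1, "roads": 8.7, "parks": 7.2, "trash": 8.9})

print(pd.DataFrame({"derived": derived.round(2), "stated": stated}))
```

Where the two columns rank services differently, the two versions of importance-performance analysis point administrators toward different improvement priorities, which is the divergence the abstract highlights.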