The Three Prongs of a Jurisprudential Regimes Test: A Response to Kritzer and Richards
In: The journal of politics: JOP, Volume 72, Issue 2, pp. 289-291
ISSN: 1468-2508
In: The journal of politics: JOP, Volume 72, Issue 2, pp. 273-284
ISSN: 1468-2508
The founding debate of judicial politics, whether Supreme Court decision making is driven by law or politics, remains at center stage. One influential line of attack involves the identification of jurisprudential regimes: stable patterns of case decisions based on the influence of case factors. The key test is whether the regime changes after a major precedent-setting decision, that is, whether the case factors are subsequently treated differently by the Supreme Court justices themselves, so that they vote as though constrained by precedent. We analyze whether binding jurisprudential regime change actually exists. The standard test assumes votes are independent observations, even though they are clustered by case and by term. We argue that a (nonparametric) randomization test is more appropriate. We find little evidence that precedents affect voting.
In: Political analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 22, Issue 4, pp. 457-463
ISSN: 1476-4989
International relations scholars frequently rely on data sets with country pairs, or dyads, as the unit of analysis. Dyadic data, with its thousands and sometimes hundreds of thousands of observations, may seem ideal for hypothesis testing. However, dyadic observations are not independent events. Failure to account for this dependence in the data dramatically understates the size of standard errors and overstates the power of hypothesis tests. We illustrate this problem by analyzing a central proposition among IR scholars, the democratic trade hypothesis, which claims that democracies seek out other democracies as trading partners. We employ randomization tests to infer the correct p-values associated with the trade hypothesis. Our results show that typical statistical tests for significance are severely overconfident when applied to dyadic data.
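The key point in this abstract is that dyads sharing a country are not independent, so any reshuffling under the null must happen at the country level, not the dyad level. The following is a minimal illustrative sketch of that idea; the function, data, and variable names are hypothetical and are not the authors' actual procedure. It assumes a binary country trait (e.g. democracy) and tests whether joint-trait dyads show higher trade, permuting the trait across countries so that within-country dependence is preserved.

```python
# Illustrative sketch (hypothetical names): a randomization test for dyadic
# data that scrambles a country-level trait across countries rather than
# across dyads, preserving the dependence between dyads sharing a country.
import itertools
import random

def dyadic_randomization_test(trait, trade, n_permutations=5000, seed=0):
    """trait: dict country -> 0/1; trade: dict (i, j) -> flow, i < j.
    One-sided p-value for the statistic 'mean trade among joint-trait
    dyads minus mean trade among all other dyads'. Assumes both dyad
    sets are nonempty under every permutation (true when the trait has
    at least two 1s and at least one 0)."""
    rng = random.Random(seed)
    countries = sorted(trait)

    def stat(t):
        joint = [v for (i, j), v in trade.items() if t[i] and t[j]]
        other = [v for (i, j), v in trade.items() if not (t[i] and t[j])]
        return sum(joint) / len(joint) - sum(other) / len(other)

    observed = stat(trait)
    values = list(trait.values())
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(values)                      # scramble at the country level
        perm = dict(zip(countries, values))
        if stat(perm) >= observed:               # at least as extreme?
            count += 1
    return count / n_permutations

# Toy example: countries A-C are "democracies"; joint-democracy dyads trade more.
trait = {"A": 1, "B": 1, "C": 1, "D": 0, "E": 0, "F": 0}
trade = {(i, j): (10 if trait[i] and trait[j] else 2)
         for i, j in itertools.combinations(sorted(trait), 2)}
p = dyadic_randomization_test(trait, trade)      # small p: trait predicts trade
```

Because the trait is shuffled across countries, every permuted data set keeps the dyadic clustering intact, which is what makes the resulting p-value honest for dependent dyads.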
In: State politics & policy quarterly: the official journal of the State Politics and Policy Section of the American Political Science Association, Volume 10, Issue 2, pp. 180-198
ISSN: 1532-4400
Many hypotheses in U.S. state politics research are multi-level, positing that state-level variables affect individual-level behavior. Unadjusted standard errors for state-level variables are too small, leading to overconfidence and possible false rejection of null hypotheses. Primo, Jacobsmeier, and Milyo (2007) explore this problem in their reanalysis of Wolfinger, Highton, and Mullin's (2005) data on the effects of post-registration laws on voter turnout. Primo et al. advocate the use of clustered standard errors to solve the overconfidence problem, but we offer an alternative solution: randomization tests. Randomization tests are non-parametric tests that do not rely on comparisons to theoretical test statistic distributions. Instead, they use distributions tailored to the data, created by randomly scrambling the data many times to simulate what would be observed under the null hypothesis. Unlike with clustered standard errors, under the randomization test U.S. state-level reforms generally fail to reach significance, both as additive effects and as interactions with individual characteristics.