In: Political Analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 27, Issue 1, pp. 101-106
Abstract: Can constitutional court decisions shape public opinion on a governmental policy? Previous studies have focused on the US Supreme Court, whose high degree of public support is the major source of power for courts. In this study, we examine the extent to which European courts can influence public opinion regarding a government bill. First, we argue that public support for courts also allows them to move public opinion on policies in the direction of their decisions. This works in both directions: courts can confer legitimacy on a policy that they support, but they can also de-legitimize a policy that they oppose. Second, we argue that this mechanism strongly depends on the amount of support a court receives: the effect appears only for courts with higher institutional legitimacy and among citizens who trust the court.

We test our arguments by combining a most different systems design for France and Germany with a survey priming experiment on a school security bill. France and Germany are selected because they exhibit different institutional designs as well as different levels of aggregate support for their courts. The survey experiment is implemented within large national election surveys, the German Internet Panel and the French National Election Study, each with more than 2,600 respondents. The experiment primes decision outcomes and different institutions, allowing us to distinguish between an institution supporting versus opposing a policy and between a court and alternative institutions.

Our findings confirm that, given higher public support, courts can move citizens' opinions to both legitimize and de-legitimize a policy. This effect appears at the aggregate level for a court enjoying higher public support, and at the individual level for respondents with higher trust in the court. Interestingly, courts can even move the opinions of citizens with strong prior attitudes in the opposite direction, provided these citizens strongly trust the court.

These findings have implications beyond the study itself. First, they confirm that the legitimacy-conferring effect can also be observed for European courts, not only for the US Supreme Court. Second, they show that a mechanism identified for a single case, such as the US Supreme Court, may hold only under specific conditions. Because public support for courts varies strongly across European countries, we expect the impact of any mechanism relying on public support to vary as well, as our own analysis shows.
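The following is a minimal, hedged sketch of the kind of analysis such a priming experiment implies: comparing policy support between primed and control respondents, split by trust in the court. It uses simulated data and made-up effect sizes, not the authors' data or code; variable names (primed, high_trust, support) are hypothetical.

```python
# Hedged sketch (simulated data, not the authors' analysis): estimating how a
# court-decision prime shifts policy support, separately for respondents with
# high vs. low trust in the court.
import numpy as np

rng = np.random.default_rng(1)
n = 2600

# Random assignment: 1 = primed with the court opposing the bill, 0 = control.
primed = rng.integers(0, 2, size=n)
high_trust = rng.integers(0, 2, size=n)  # hypothetical trust indicator

# Simulated policy support on (roughly) a 7-point scale: the prime moves
# opinion only among high-trust respondents, mirroring the paper's
# conditional-legitimacy argument.
support = 4.0 - 0.6 * primed * high_trust + rng.normal(0, 1.2, size=n)

def diff_in_means(y, t):
    """Treatment-control difference in mean support."""
    return y[t == 1].mean() - y[t == 0].mean()

for label, mask in [("high trust", high_trust == 1), ("low trust", high_trust == 0)]:
    effect = diff_in_means(support[mask], primed[mask])
    print(f"Priming effect among {label} respondents: {effect:+.2f}")
```

In the simulated data, the subgroup comparison recovers a negative priming effect only among high-trust respondents, which is the pattern of conditional legitimacy conferral the abstract describes.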
In: Political Analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 27, Issue 2, pp. 255-262
We offer a dynamic Bayesian forecasting model for multiparty elections. It combines data from published pre-election public opinion polls with information from fundamentals-based forecasting models. The model accounts for the multiparty nature of the setting and allows statements about the probability of other quantities of interest, such as a plurality of votes for a party or a majority for certain coalitions in parliament. We present results from two ex ante forecasts of elections that took place in 2017 and show that the model outperforms fundamentals-based forecasting models in terms of accuracy and the calibration of uncertainty. Provided that historical and current polling data are available, the model can be applied to any multiparty setting.
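To make the logic of combining fundamentals and polls concrete, here is a heavily simplified, static stand-in for such a model, not the authors' dynamic implementation: the fundamentals forecast is encoded as a Dirichlet prior over vote shares, aggregated polls enter as multinomial counts, and posterior draws yield probabilities for quantities like a plurality or a coalition majority. All parties, numbers, and the prior-strength parameter are invented for illustration.

```python
# Minimal sketch: a static conjugate approximation to combining a
# fundamentals-based forecast with pre-election polls. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
parties = ["A", "B", "C", "D", "E"]

# Fundamentals-based forecast of vote shares, encoded as a Dirichlet prior;
# prior_strength governs how much weight the fundamentals get relative to
# poll respondents (a hypothetical tuning choice).
fundamentals = np.array([0.34, 0.26, 0.12, 0.10, 0.18])
prior_strength = 400.0
alpha_prior = fundamentals * prior_strength

# Aggregated pre-election polls as multinomial counts (hypothetical).
poll_counts = np.array([900, 620, 310, 250, 420])

# Conjugate update: Dirichlet posterior over election-day vote shares.
alpha_post = alpha_prior + poll_counts

# Monte Carlo draws propagate uncertainty into derived quantities of interest.
draws = rng.dirichlet(alpha_post, size=20000)

p_plurality_A = np.mean(draws.argmax(axis=1) == 0)
# Probability that a hypothetical A+C coalition clears 50% of the vote,
# used here as a crude proxy for a parliamentary majority.
p_coalition_AC = np.mean(draws[:, 0] + draws[:, 2] > 0.5)

print(f"P(A wins plurality)      = {p_plurality_A:.2f}")
print(f"P(A+C majority of votes) = {p_coalition_AC:.2f}")
```

The paper's model additionally lets latent party support evolve dynamically between poll dates; the conjugate shortcut above drops that component and only illustrates how a prior from fundamentals and evidence from polls combine into event probabilities.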
This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers' expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team's workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
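The headline quantity, the share of variance in numerical results left unexplained by coded decisions, can be illustrated with a small sketch: regress team-level effect estimates on coded analytical choices and report the R-squared. The data below are simulated and the coding scheme is hypothetical; this is not the study's analysis.

```python
# Illustrative sketch (simulated data): how much between-team variation in
# reported effect sizes is explained by coded analytical decisions, summarized
# as R^2 from a simple linear model.
import numpy as np

rng = np.random.default_rng(0)
n_teams = 73

# Coded binary analytical decisions per team (e.g., estimator choice, country
# sample, operationalization of immigration) -- purely simulated here.
decisions = rng.integers(0, 2, size=(n_teams, 10))

# Simulated team-level effect estimates: decisions explain only a small share,
# and most variation is idiosyncratic noise.
effects = decisions @ rng.normal(0, 0.02, size=10) + rng.normal(0, 0.25, size=n_teams)

X = np.column_stack([np.ones(n_teams), decisions])
beta, *_ = np.linalg.lstsq(X, effects, rcond=None)
resid = effects - X @ beta
r2 = 1 - resid.var() / effects.var()

print(f"Share of variance explained by coded decisions: {r2:.2%}")
print(f"Unexplained share: {1 - r2:.2%}")
```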