Nonparametric modelling of biodiversity: Determinants of threatened species
In: Journal of policy modeling: JPMOD ; a social science forum of world issues, Band 33, Heft 4, S. 618-635
ISSN: 0161-8938
In: Journal of econometrics 81.1997,1
In: Annals of econometrics
In: Applied Economics, Band 41, Heft 2, S. 249-267
This paper explores the neglected issue of using profit efficiency for the best-practice benchmarking of UK universities, asking whether it supports the policy stance of encouraging more specialised university production. The paper also investigates whether nonparametric modelling with financial ratios, in contrast to nonparametric modelling based on the prices and quantities of each university's inputs and outputs, can yield ready insights into this profit-efficiency issue. The empirical results, using two new approaches, confirm that more specialised university production yields higher performance on average than less specialised production. The results also highlight certain advantages of financial-ratio modelling.
SSRN
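As a rough illustration of the price-and-quantity route to profit efficiency that this abstract contrasts with financial-ratio modelling, the sketch below solves a standard variable-returns-to-scale profit-maximisation linear programme and compares each unit's observed profit with the frontier maximum. The toy data, the prices and this particular DEA formulation are illustrative assumptions, not the paper's own model.

```python
# Minimal sketch (assumed formulation): VRS profit-efficiency DEA via linear programming.
import numpy as np
from scipy.optimize import linprog

def max_profit(X, Y, w, p):
    """Maximum attainable profit p'y - w'x over the VRS technology spanned by (X, Y).

    X : (n, m) observed inputs, Y : (n, s) observed outputs,
    w : (m,) input prices,      p : (s,) output prices.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [lambda (n), x (m), y (s)]; linprog minimises, so negate profit.
    c = np.concatenate([np.zeros(n), w, -p])
    # Technology: sum_j lam_j * x_j <= x  and  y <= sum_j lam_j * y_j.
    A_in = np.hstack([X.T, -np.eye(m), np.zeros((m, s))])
    A_out = np.hstack([-Y.T, np.zeros((s, m)), np.eye(s)])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.zeros(m + s)
    # Variable returns to scale: intensity weights sum to one.
    A_eq = np.concatenate([np.ones(n), np.zeros(m + s)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + m + s), method="highs")
    return -res.fun

# Toy data: 5 units, 2 inputs, 1 output (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(5, 2))
Y = rng.uniform(1, 10, size=(5, 1))
w, p = np.array([1.0, 0.5]), np.array([2.0])

pi_max = max_profit(X, Y, w, p)
for j in range(5):
    pi_obs = p @ Y[j] - w @ X[j]
    # The observed/maximal profit ratio is only meaningful when pi_max > 0,
    # which the toy prices guarantee here.
    print(f"unit {j}: observed profit {pi_obs:6.2f}, profit efficiency {pi_obs / pi_max:5.2f}")
```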
In: Advanced Studies in Theoretical and Applied Econometrics 15
One: Unit Root and Fractional Integration -- 1. Testing for a Unit Root in the Presence of a Maintained Trend -- 2. Random Walks Versus Fractional Integration: Power Comparisons of Scalar and Joint Tests of the Variance-Time Function -- 3. Testing for a Random Walk: A Simulation Experiment of Power When the Sampling Interval is Varied -- Two: Nonparametric Econometrics -- 4. Estimation of a Probability Density Function with Applications to Nonparametric Inference in Econometrics -- 5. Estimation of the Shape of the Demand Curve by Nonparametric Kernel Methods -- Three: Modelling Demand Systems -- 6. A Class of Dynamic Demand Systems -- 7. A Reinterpretation of the Almost Ideal Demand System -- 8. Stochastic Specification and Maximum-Likelihood Estimation of the Linear Expenditure System -- Four: Modelling Issues -- 9. Selection Bias: More than a Female Phenomenon -- 10. A Comparison of Two Significance Tests for Structural Stability in the Linear Regression Model -- 11. Rates of Return on Physical and R&D Capital and Structure of the Production Process: Cross Section and Time Series Evidence.
In: Bundesbank Series 2 Discussion Paper No. 2009,07
SSRN
In: Discussion paper
In: Ser. 2, Banking and financial studies 2009,07
SSRN
In: Diskussionspapiere des Fachbereichs Wirtschaftswissenschaften, Universität Hannover 319
This paper presents a survey of panel data methods, with an emphasis on new developments. In particular, linear multilevel models, including a new variant, are discussed. Furthermore, nonlinear, nonparametric and semiparametric models are analyzed. In contrast to linear models, no unified methods exist for nonlinear approaches. In this case, fixed-effects (FEM) estimators are dominated by conditional maximum likelihood (CML) methods. Under random-effects (REM) assumptions it is often possible to use the ML method directly; GMM and simulated estimators exist as alternatives. If the nonlinear function is not known exactly, nonparametric or semiparametric methods should be preferred.
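To make the linear part of that survey concrete, here is a minimal sketch, on assumed simulated data, of pooled OLS next to the within (fixed-effects) estimator; it is not taken from the survey itself, and the nonlinear CML and GMM estimators it discusses would need considerably more machinery.

```python
# Minimal sketch: pooled OLS vs. within (fixed-effects) estimator on a simulated panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_units, n_periods, beta = 200, 6, 1.5

# Unit effects correlated with the regressor: pooled OLS is biased, the within estimator is not.
alpha = rng.normal(size=n_units)
x = alpha[:, None] + rng.normal(size=(n_units, n_periods))
y = alpha[:, None] + beta * x + rng.normal(size=(n_units, n_periods))

df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_periods),
    "x": x.ravel(),
    "y": y.ravel(),
})

# Pooled OLS slope: demean over the whole sample, ignoring the unit effects.
xc, yc = df["x"] - df["x"].mean(), df["y"] - df["y"].mean()
beta_pooled = np.dot(xc, yc) / np.dot(xc, xc)

# Within (fixed-effects) slope: demean x and y inside each unit first.
xw = df["x"] - df.groupby("unit")["x"].transform("mean")
yw = df["y"] - df.groupby("unit")["y"].transform("mean")
beta_within = np.dot(xw, yw) / np.dot(xw, xw)

print(f"true beta {beta}, pooled OLS {beta_pooled:.2f}, within estimator {beta_within:.2f}")
```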
Estimation of heteroskedasticity and autocorrelation consistent (HAC) covariance matrices is a well-established problem in time series. Results have been established under a variety of weak conditions on temporal dependence and heterogeneity that allow one to conduct inference on a variety of statistics; see Newey and West (1987), Hansen (1992), de Jong and Davidson (2000), and Robinson (2004). Indeed, there is an extensive literature on automating these procedures, starting with Andrews (1991). Alternative methods for conducting inference include the bootstrap, for which there is now a very active research program in time series; see Lahiri (2003) for an overview. One convenient method for time series is the subsampling approach of Politis, Romano, and Wolf (1999). This method was used by Linton, Maasoumi, and Whang (2003) (henceforth LMW) in the context of testing for stochastic dominance. This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data are unbalanced or incomplete. In this case, one can work only with the common sample, to which standard HAC/bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data; see for example Connor and Korajczyk (1993). These data also occur in cross-country studies.
BASE
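A minimal sketch of the kind of HAC calculation this abstract builds on: a Newey-West (Bartlett-kernel) long-run variance for the sample mean of an autocorrelated series. The AR(1) data and the rule-of-thumb lag truncation are assumptions for illustration; automatic bandwidth selection in the spirit of Andrews (1991) and the paper's missing-data modifications are not implemented here.

```python
# Minimal sketch: Newey-West long-run variance and HAC standard error of a sample mean.
import numpy as np

def newey_west_lrv(u, lags):
    """Long-run variance of a series using Bartlett weights on sample autocovariances."""
    u = np.asarray(u) - np.mean(u)
    T = u.shape[0]
    lrv = u @ u / T                          # gamma_0
    for j in range(1, lags + 1):
        gamma_j = u[j:] @ u[:-j] / T         # j-th sample autocovariance
        lrv += 2.0 * (1.0 - j / (lags + 1)) * gamma_j
    return lrv

# AR(1) example: the usual i.i.d. standard error understates the uncertainty of the mean.
rng = np.random.default_rng(2)
T, rho = 500, 0.7
e = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + e[t]

lags = int(4 * (T / 100) ** (2 / 9))         # common rule-of-thumb truncation
se_iid = x.std(ddof=1) / np.sqrt(T)
se_hac = np.sqrt(newey_west_lrv(x, lags) / T)
print(f"mean {x.mean():.3f}, iid se {se_iid:.3f}, HAC se {se_hac:.3f}")
```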
In: Palgrave texts in econometrics
In: Springer eBook Collection
1 Introduction -- 2 'Classical' Techniques of Modelling Trends and Cycles -- 3 Stochastic Trends and Cycles -- 4 Filtering Economic Time Series -- 5 Nonlinear and Nonparametric Trend and Cycle Modelling -- 6 Multivariate Modelling of Trends and Cycles -- 7 Conclusions.
We revisit a classical method for ecological risk assessment, the Species Sensitivity Distribution (SSD) approach, in a Bayesian nonparametric framework. SSD is a mandatory diagnostic required by environmental regulatory bodies from the European Union, the United States, Australia, China etc. Yet, it is subject to much scientific criticism, notably concerning a historically debated parametric assumption for modelling species variability. Tackling the problem using nonparametric mixture models, it is possible to shed this parametric assumption and build a statistically sounder basis for SSD. We use Normalized Random Measures with Independent Increments (NRMI) as the mixing measure because they offer a greater flexibility than the Dirichlet process. Indeed, NRMI can induce a prior on the number of components in the mixture model that is less informative than the Dirichlet process. This feature is consistent with the fact that SSD practitioners do not usually have a strong prior belief on the number of components. In this short paper, we illustrate the advantage of the nonparametric SSD over the classical normal SSD and a kernel density estimate SSD on several real datasets. We summarise the results of the complete study in Kon Kam King et al. (2016), where the method is generalised to censored data and a systematic comparison on simulated data is also presented, along with a study of the clustering induced by the mixture model to examine patterns in species sensitivity.
BASE
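For orientation, the sketch below computes the two baseline SSD fits this abstract compares against, a normal fit to log10 toxicity values and a kernel density estimate, and reads the HC5 (the concentration hazardous to 5% of species) off each. The toxicity values are invented, and the paper's NRMI mixture requires dedicated Bayesian nonparametric machinery that is not reproduced here.

```python
# Minimal sketch: classical normal SSD vs. KDE SSD on hypothetical toxicity data.
import numpy as np
from scipy import stats

# Hypothetical species sensitivities: log10 of a critical effect concentration per species.
log_tox = np.log10(np.array([1.2, 2.5, 3.1, 4.8, 7.9, 12.0, 15.5, 22.0, 40.0, 95.0]))

# Classical normal SSD: fit a Gaussian on the log scale and take its 5th percentile (HC5).
mu, sigma = log_tox.mean(), log_tox.std(ddof=1)
hc5_normal = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

# KDE SSD: smooth the data with a Gaussian kernel, then approximate the 5th percentile
# of the smoothed distribution by Monte Carlo resampling from the KDE.
kde = stats.gaussian_kde(log_tox)
draws = kde.resample(100_000, seed=3).ravel()
hc5_kde = 10 ** np.quantile(draws, 0.05)

print(f"HC5 (normal SSD): {hc5_normal:.2f}, HC5 (KDE SSD): {hc5_kde:.2f}")
```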