WHAT NATURE AND ORIGINS LEAVES OUT
In: Critical review: a journal of politics and society, Vol. 24, No. 4, pp. 569-642
ISSN: 1933-8007
In: Journal of leisure research: JLR, Vol. 44, No. 2, pp. 234-256
ISSN: 2159-6417
In: Dissent: a quarterly of politics and culture, Vol. 67, No. 2, pp. 12-16
ISSN: 1946-0910
In: Children & young people now, Vol. 2014, No. 25, pp. 24-26
ISSN: 2515-7582
An estimated one in four young homeless people are lesbian, gay, bisexual or transgender. Emily Rogers visits a housing project addressing their specific needs and explores the lessons for other services.
In: NACLA Report on the Americas, Vol. 47, No. 4, pp. 30-32
ISSN: 2471-2620
In: Entwicklung und Zusammenarbeit: E + Z, Vol. 41, No. 7
ISSN: 0721-2178
In: Neurotransmitter, Vol. 31, No. 4, pp. 24-26
ISSN: 2196-6397
In: NBER Working Paper No. w26244
In: Communications in statistics. Simulation and computation, Vol. 24, No. 1, pp. 1-16
ISSN: 1532-4141
In: Communications in statistics. Simulation and computation, Vol. 16, No. 1, pp. 263-297
ISSN: 1532-4141
In: Decision sciences, Vol. 26, No. 6, pp. 803-818
ISSN: 1540-5915
Abstract: The widespread use of regression analysis as a business forecasting tool and renewed interest in the use of cross‐validation to aid in regression model selection make it essential that decision makers fully understand methods of cross‐validation in forecasting, along with the advantages and limitations of such analysis. Only by fully understanding the process can managers accurately interpret the important implications of statistical cross‐validation results in their determination of the robustness of regression forecasting models. Through a multiple regression analysis of a large insurance company's customer database, the Herzberg equation for determining the criterion of validity [11] and analysis of samples of different size from the two regions covered by the database, we illustrate the use of statistical cross‐validation and test a set of factors hypothesized to be related to the statistical accuracy of validation. We find that increasing sample size will increase reliability. When the magnitude of population model differences is small, validation results are found to be unreliable, and increasing sample size has little or no effect on reliability. In addition, the relative fit of the model for the derivative sample and the validation sample has an impact on validation accuracy, and should be used as an indicator of when further analysis should be undertaken. Furthermore, we find that the probability distribution of the population independent variables has no effect on validation accuracy.
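The validation idea the abstract describes can be illustrated with a minimal holdout sketch. Everything below is hypothetical (simulated data, a single-predictor model); the study itself uses the Herzberg equation and an insurance database, neither of which is reproduced here. A model is fit on a derivation sample, then its fit on a separate validation sample is compared to gauge how much the apparent fit shrinks.

```python
import random

def fit_simple_ols(xs, ys):
    # closed-form least squares for y = a + b*x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    # coefficient of determination of the fitted line on (xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 + 0.5 * x + random.gauss(0, 1) for x in xs]

# derivation sample: first 100 points; validation sample: remaining 100
a, b = fit_simple_ols(xs[:100], ys[:100])
r2_deriv = r_squared(xs[:100], ys[:100], a, b)
r2_valid = r_squared(xs[100:], ys[100:], a, b)

# shrinkage: how much worse the model fits data it was not fit on;
# a large gap signals that further analysis is warranted
shrinkage = r2_deriv - r2_valid
print(round(r2_deriv, 3), round(r2_valid, 3), round(shrinkage, 3))
```

With a correctly specified model and samples of this size the two R² values stay close; the abstract's point is that when the gap is large, the validation result itself should be treated with caution.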
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 78, No. 4, pp. 743-758
ISSN: 1467-9574
Abstract: In density estimation, the mean integrated squared error (MISE) is commonly used as a measure of performance. In that setting, the cross‐validation criterion provides an unbiased estimator of the MISE minus the integral of the squared density. Since the minimum MISE is known to converge to zero, this suggests that the minimum value of the cross‐validation criterion could be regarded as an estimator of minus the integrated squared density. This novel proposal presents the outstanding feature that, unlike all other existing estimators, it does not need the choice of any tuning parameter. Indeed, it is proved here that this approach results in a consistent and efficient estimator, with remarkable performance in practice. Moreover, apart from this base case, it is shown how several other problems on density functional estimation can be similarly handled using this new principle, thus demonstrating full potential for further applications.
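A rough, hedged sketch of the idea, assuming simulated standard-normal data and a Gaussian-kernel density estimator (the paper's own estimator and proofs are not reproduced here): minimize the least-squares cross-validation criterion over a bandwidth grid and read off the minimum as an estimate of minus the integrated squared density, which for the standard normal equals -1/(2*sqrt(pi)) ≈ -0.282.

```python
import math
import random

SQRT2 = math.sqrt(2.0)

def phi(u):
    # standard normal density
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def ls_cv(data, h):
    # least-squares CV criterion: integral(fhat^2) - (2/n) * sum_i fhat_{-i}(x_i)
    n = len(data)
    # integral of fhat^2 has a closed form for a Gaussian kernel:
    # (1/n^2) * sum_{j,k} N(0, 2h^2)(x_j - x_k)
    t1 = sum(phi((x - y) / (h * SQRT2))
             for x in data for y in data) / (n * n * h * SQRT2)
    # leave-one-out term: (2/n) * sum_i fhat_{-i}(x_i)
    t2 = 2.0 * sum(phi((x - y) / h)
                   for i, x in enumerate(data)
                   for j, y in enumerate(data) if i != j) / (n * (n - 1) * h)
    return t1 - t2

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]

grid = [0.15 + 0.05 * k for k in range(15)]  # candidate bandwidths
scores = {h: ls_cv(data, h) for h in grid}
best_h = min(scores, key=scores.get)
min_cv = scores[best_h]  # estimate of minus the integrated squared density
print(round(best_h, 2), round(min_cv, 4))
```

Note that no tuning parameter beyond the bandwidth grid itself is needed: the same minimization that selects the bandwidth delivers the functional estimate as a by-product.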
In: Central European journal of operations research
ISSN: 1613-9178
Abstract: Although cross-validation (CV) is a standard technique in machine learning and data science, its efficacy remains largely unexplored in ranking environments. When evaluating the significance of differences, cross-validation is typically coupled with statistical testing, such as the Dietterich, Alpaydin, or Wilcoxon test. In this paper, we evaluate the power and false positive error rate of the Dietterich, Alpaydin, and Wilcoxon statistical tests combined with cross-validation, each operating with folds ranging from 5 to 10, resulting in a total of 18 variants. Our testing setup utilizes a ranking framework, similar to the Sum of Ranking Differences (SRD) statistical procedure: we assume the existence of a reference ranking, and distances are measured in the $L_1$-norm. We test the methods under artificial scenarios as well as on real data borrowed from sports and chemistry. The choice of the optimal CV test method depends on preferences related to the minimization of type I and type II errors, the size of the input, and anticipated patterns in the data. Among the investigated input sizes, the Wilcoxon method with eight folds proved to be the most effective, although its performance in type I situations is subpar. While the Dietterich and Alpaydin methods excel in type I situations, they perform poorly in type II scenarios. The inadequate performance of these tests raises questions about their efficacy outside of ranking environments as well.
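A minimal sketch of the kind of CV-plus-testing setup the abstract compares: the Wilcoxon signed-rank test applied to paired per-fold scores of two models over eight folds. The per-fold misclassification counts are hypothetical, and the rejection threshold is the standard table critical value for n = 8 at two-sided α = 0.05; this is an illustration of the test mechanics, not a reproduction of the paper's 18-variant experiment.

```python
def average_ranks(values):
    # 1-based ranks, with tied values receiving the average of their positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # positions i..j hold ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_signed_rank(a, b):
    # paired differences; zero differences are dropped, as in the standard test
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranks = average_ranks([abs(d) for d in diffs])
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus), len(diffs)

# hypothetical per-fold misclassification counts of two models over 8 CV folds
model_a = [21, 19, 23, 20, 22, 18, 24, 20]
model_b = [25, 24, 26, 22, 27, 23, 28, 21]

w, n = wilcoxon_signed_rank(model_a, model_b)
# table critical value for n = 8, two-sided alpha = 0.05: reject if W <= 3
significant = w <= 3
print(w, n, significant)  # prints: 0 8 True
```

Here model A beats model B on every fold, so W = 0 and the difference is declared significant; the paper's concern is how often such a verdict is right (power) or wrong (false positives) as the fold count and test variant change.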