Cuvillier Verlag

Publications, Dissertations, Habilitations & Brochures.
International Specialist Publishing House for Science and Economy

Statistical Issues in Machine Learning: Towards Reliable Split Selection and Variable Importance Measures

Hard Copy
EUR 28.00 EUR 26.60

E-book
EUR 19.60

Statistical Issues in Machine Learning: Towards Reliable Split Selection and Variable Importance Measures

Carolin Strobl (Author)

Preview

Table of Contents, file (56 KB)
Extract, file (84 KB)

ISBN-10 (Print Edition) 3867276617
ISBN-13 (Hard Copy) 9783867276610
ISBN-13 (eBook) 9783736926615
Language English
Page Number 204
Cover Lamination glossy
Edition 1st edition
Publication Place Göttingen
Place of Dissertation München
Publication Date 2008-07-30
General Categorization Dissertation
Departments Mathematics
Informatics
Biochemistry, molecular biology, gene technology
Keywords CART, bagging, random forest, Gini index, variable importance
Description

Recursive partitioning methods from machine learning are widely applied in many scientific fields, such as genetics and bioinformatics. The present work addresses, from a statistical point of view, the two main problems that arise in recursive partitioning: instability and biased variable selection. With respect to the first issue, instability, this work covers the entire range of methods, from standard classification trees through robustified classification trees to ensemble methods such as TWIX, bagging and random forests.
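For readers unfamiliar with how such trees choose their splits: CART-style classification trees select the cutpoint that maximizes the reduction in Gini impurity. The following is a minimal sketch written for this page (not code from the dissertation), for a single numeric predictor:

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_gain(x, y, cutpoint):
    """Impurity reduction from splitting numeric feature x at `cutpoint`."""
    left = [yi for xi, yi in zip(x, y) if xi <= cutpoint]
    right = [yi for xi, yi in zip(x, y) if xi > cutpoint]
    n = len(y)
    return gini(y) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

# Toy data: a perfectly separating cutpoint recovers the full impurity of 0.5.
x = [1, 2, 3, 10, 11, 12]
y = [0, 0, 0, 1, 1, 1]
best = max(gini_gain(x, y, c) for c in x[:-1])
```

Because this search runs over every candidate cutpoint, predictors that offer more cutpoints (e.g., continuous variables or factors with many categories) get more chances to produce a spuriously high gain, which is precisely the variable selection bias examined in this work.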
While ensemble methods prove to be much more stable than single trees, they also lose most of their interpretability. Therefore, an adaptive cutpoint selection scheme is suggested with which a TWIX ensemble reduces to a single tree if the partition is sufficiently stable.

With respect to the second issue, variable selection bias, the statistical sources of this artifact in single trees are investigated, along with a new form of bias inherent in ensemble methods based on bootstrap samples. For single trees, one unbiased split selection criterion is evaluated and another is newly introduced. Based on the results for single trees and further findings on the effects of bootstrap sampling on association measures, it is shown that, in addition to using an unbiased split selection criterion, subsampling rather than bootstrap sampling should be employed in ensemble methods so that the variable importance scores of predictor variables of different types can be compared reliably. The statistical properties and the null hypothesis of a test for the random forest variable importance are critically examined. Finally, a new, conditional importance measure is suggested that allows for a fair comparison in the case of correlated predictor variables and better reflects the null hypothesis of interest.
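The difference between the two resampling schemes can be made concrete with a small Python sketch (an illustration with names chosen here, not the author's code): a bootstrap sample of size n contains on average only about 63.2% distinct observations, with the rest appearing as duplicates, whereas a subsample of the same effective size drawn without replacement contains each selected observation exactly once.

```python
import random

def bootstrap_sample(n, rng):
    """Draw n indices WITH replacement, as in classical bagging."""
    return [rng.randrange(n) for _ in range(n)]

def subsample(n, frac, rng):
    """Draw floor(frac * n) distinct indices WITHOUT replacement."""
    return rng.sample(range(n), int(frac * n))

rng = random.Random(0)
n = 1000
boot = bootstrap_sample(n, rng)
sub = subsample(n, 0.632, rng)

unique_boot = len(set(boot)) / n         # roughly 1 - 1/e, about 0.632
unique_sub = len(set(sub)) / len(sub)    # exactly 1.0 by construction
```

The duplicated observations in bootstrap samples are one source of the artifacts the work describes, which motivates fitting each tree of the ensemble on a subsample instead.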