We develop algorithms for computing an element-weighted low-rank matrix approximation (SVD) via projected gradient descent. We consider two acceleration schemes, Nesterov and Anderson, discuss their implementation, and show how to scale these algorithms to high-dimensional problems. (Sketches of the basic projected-gradient iteration and a Nesterov-accelerated variant appear at the end of this page.)

Added a new blog entry on Altered Priors. You build a classifier on some training data, but you would like to deploy it in a population where the class distribution (prior) is different. This comes up in case-control sampling, but also in other situations such as transfer learning. (A sketch of the standard prior-shift adjustment appears at the end of this page.)

We use the OOB information to estimate jackknife standard-error intervals for the generalisation error of random forests. Samyak Rajanala, Stephen Bates, Trevor Hastie and Rob Tibshirani. The research reported here was partially supported by grants from the National Science Foundation and the National Institutes of Health. (A minimal OOB example appears at the end of this page.)

An Introduction to Statistical Learning with Applications in R (Second Edition)
By Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani (August 2021)
3 new chapters (+179 pages), including Deep Learning

Computer Age Statistical Inference: Algorithms, Evidence, and Data Science
By Bradley Efron and Trevor Hastie (August 2016)

Statistical Learning with Sparsity: The Lasso and Generalizations
By Trevor Hastie, Robert Tibshirani and Martin Wainwright (May 2015)

An Introduction to Statistical Learning with Applications in R
By Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani (June 2013)

The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Second Edition)
By Trevor Hastie, Robert Tibshirani and Jerome Friedman (2009)

The Elements of Statistical Learning: Data Mining, Inference, and Prediction
By Trevor Hastie, Robert Tibshirani and Jerome Friedman (2001)

Statistical Models in S
Edited by John Chambers and Trevor Hastie (1991)

Generalized Additive Models
By Trevor Hastie and Robert Tibshirani (1990)
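As an illustration of the projected-gradient approach mentioned above, here is a minimal NumPy sketch for one common formulation: minimize the elementwise-weighted loss sum_ij W_ij (X_ij - M_ij)^2 over matrices M of rank at most r. The function name, zero initialization, and fixed step size are assumptions for this example, not the paper's implementation.

```python
import numpy as np

def weighted_low_rank(X, W, r, n_iter=500):
    """Projected gradient descent for min_M sum_ij W_ij * (X_ij - M_ij)**2
    subject to rank(M) <= r, where W has nonnegative entries.
    (Illustrative sketch, not the paper's code.)"""
    step = 1.0 / (2.0 * W.max())              # 1/L step: the gradient 2*W*(M - X) is 2*max(W)-Lipschitz
    M = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        G = 2.0 * W * (M - X)                 # gradient of the elementwise-weighted squared loss
        U, s, Vt = np.linalg.svd(M - step * G, full_matrices=False)
        M = (U[:, :r] * s[:r]) @ Vt[:r]       # projection onto rank <= r via truncated SVD
    return M

# Sanity check: with equal weights this reduces to the ordinary rank-r truncated SVD of X.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 12))
M = weighted_low_rank(X, np.ones_like(X), r=3)
```

The cost per iteration is dominated by the SVD of the gradient-step matrix, which is what makes acceleration and scaling to high dimensions relevant.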
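The announcement mentions Nesterov and Anderson acceleration. Below is a minimal Nesterov-style (FISTA-type) momentum variant of the same iteration; since the rank constraint is nonconvex, this is a heuristic use of the momentum sequence rather than a scheme with the usual convex guarantees, and the details are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def weighted_low_rank_nesterov(X, W, r, n_iter=500):
    """Nesterov-style momentum wrapped around the projected gradient
    iteration for the elementwise-weighted low-rank problem (illustrative)."""
    step = 1.0 / (2.0 * W.max())                     # same 1/L step as the plain iteration
    M = M_prev = np.zeros_like(X, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = M + ((t - 1.0) / t_next) * (M - M_prev)  # extrapolation (momentum) step
        G = 2.0 * W * (Y - X)                        # gradient at the extrapolated point
        U, s, Vt = np.linalg.svd(Y - step * G, full_matrices=False)
        M_prev, M = M, (U[:, :r] * s[:r]) @ Vt[:r]   # project back onto rank <= r
        t = t_next
    return M
```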
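For the altered-priors setting, the standard adjustment reweights the classifier's posterior probabilities by the ratio of new to old class priors and renormalizes. A minimal sketch follows; the function name and example numbers are hypothetical, and this illustrates the textbook Bayes correction rather than the blog entry's exact treatment.

```python
import numpy as np

def shift_priors(p_train, pi_train, pi_new):
    """Adjust posterior class probabilities for a new class prior.
    p_train : (n, K) posteriors produced under the training prior.
    pi_train: (K,) class proportions in the training data.
    pi_new  : (K,) class proportions in the deployment population."""
    w = np.asarray(pi_new) / np.asarray(pi_train)  # per-class correction factors
    p = p_train * w                                # Bayes: p_new(y|x) is proportional to p_train(y|x) * pi_new(y) / pi_train(y)
    return p / p.sum(axis=1, keepdims=True)        # renormalize each row

# Example: classifier trained on a 50/50 case-control sample,
# deployed where the positive class has 5% prevalence.
p_train = np.array([[0.30, 0.70],
                    [0.80, 0.20]])
print(shift_priors(p_train, pi_train=[0.5, 0.5], pi_new=[0.95, 0.05]))
```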
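For the random-forest item: the jackknife intervals are built on the out-of-bag (OOB) information a fitted forest already contains. As background only, here is how the OOB point estimate of generalization accuracy is obtained with scikit-learn; the interval construction itself follows the paper and is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data; oob_score=True makes the forest score each training point
# using only the trees that did not see it in their bootstrap sample.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print("OOB estimate of generalization accuracy:", rf.oob_score_)
```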