Index
Cover Page
Title Page
Contents
Preface

Part I. First Steps
Chapter 1. Let’s Discuss Learning
1.1 Welcome
1.2 Scope, Terminology, Prediction, and Data
1.3 Putting the Machine in Machine Learning
1.4 Examples of Learning Systems
1.5 Evaluating Learning Systems
1.6 A Process for Building Learning Systems
1.7 Assumptions and Reality of Learning
1.8 End-of-Chapter Material
Chapter 2. Some Technical Background
2.1 About Our Setup
2.2 The Need for Mathematical Language
2.3 Our Software for Tackling Machine Learning
2.4 Probability
2.5 Linear Combinations, Weighted Sums, and Dot-Products
2.6 Drawing né Geometry
2.7 Notation and the Plus-One Trick
2.8 Getting Groovy, Breaking the Straight-Jacket, and Non-linearity
2.9 NumPy Versus “All the Maths”
2.10 Floating Point Issues
2.11 EOC
Chapter 3. Predicting Categories: Getting Started with Classification
3.1 Classification Tasks
3.2 A Simple Classification Dataset
3.3 Training and Testing: Don’t Teach to the Test
3.4 Evaluation: Grading the Exam
3.5 Simple Classifier #1: Nearest Neighbors, Long Distance Relationships, and Assumptions
3.6 Simple Classifier #2: Naive Bayes, Probability, and Broken Promises
3.7 Simplistic Evaluation of Classifiers
3.8 EOC
Chapter 4. Predicting Numerical Values: Getting Started with Regression
4.1 A Simple Regression Dataset
4.2 Nearest Neighbors Regression and Summary Statistics
4.3 Linear Regression and Errors
4.4 Optimization: Picking the Best Answer
4.5 Simple Evaluation and Comparison of Regressors
4.6 EOC
Part II. Evaluation
Chapter 5. Evaluating and Comparing Learners
5.1 Evaluation and Why Less Is More
5.2 Terminology for Learning Phases
5.3 Major Tom, There’s Something Wrong: Overfitting and Underfitting
5.4 From Errors to Costs
5.5 (Re-) Sampling: Making More from Less
5.6 Break-It-Down: Deconstructing Error into Bias and Variance
5.7 Graphical Evaluation and Comparison
5.8 Comparing Learners with Cross-Validation
5.9 EOC
Chapter 6. Evaluating Classifiers
6.1 Baseline Classifiers
6.2 Beyond Accuracy: Metrics for Classification
6.3 ROC Curves
6.4 Another Take on Multiclass: One-Versus-One
6.5 Precision-Recall Curves
6.6 Cumulative Response and Lift Curves
6.7 More Sophisticated Evaluation of Classifiers: Take Two
6.8 EOC
Chapter 7. Evaluating Regressors
7.1 Baseline Regressors
7.2 Additional Measures for Regression
7.3 Residual Plots
7.4 A First Look at Standardization
7.5 Evaluating Regressors in a More Sophisticated Way: Take Two
7.6 EOC
Part III. More Methods and Fundamentals
Chapter 8. More Classification Methods
8.1 Revisiting Classification
8.2 Decision Trees
8.3 Support Vector Classifiers
8.4 Logistic Regression
8.5 Discriminant Analysis
8.6 Assumptions, Biases, and Classifiers
8.7 Comparison of Classifiers: Take Three
8.8 EOC
Chapter 9. More Regression Methods
9.1 Linear Regression in the Penalty Box: Regularization
9.2 Support Vector Regression
9.3 Piecewise Constant Regression
9.4 Regression Trees
9.5 Comparison of Regressors: Take Three
9.6 EOC
Chapter 10. Manual Feature Engineering: Manipulating Data for Fun and Profit
10.1 Feature Engineering Terminology and Motivation
10.2 Feature Selection and Data Reduction: Taking out the Trash
10.3 Feature Scaling
10.4 Discretization
10.5 Categorical Coding
10.6 Relationships and Interactions
10.7 Target Manipulations
10.8 EOC
Chapter 11. Tuning Hyper-Parameters and Pipelines
11.1 Models, Parameters, Hyperparameters
11.2 Tuning Hyper-Parameters
11.3 Down the Recursive Rabbit Hole: Nested Cross-Validation
11.4 Pipelines
11.5 Pipelines and Tuning Together
11.6 EOC
Part IV. Adding Complexity
Chapter 12. Combining Learners
12.1 Ensembles
12.2 Voting Ensembles
12.3 Bagging and Random Forests
12.4 Boosting
12.5 Comparing the Tree-Ensemble Methods
12.6 EOC
Chapter 13. Models That Engineer Features for Us
13.1 Feature Selection
13.2 Feature Construction with Kernels
13.3 Principal Components Analysis: An Unsupervised Technique
13.4 EOC
Chapter 14. Feature Engineering for Domains: Domain-Specific Learning
14.1 Working with Text
14.2 Clustering
14.3 Working with Images
14.4 EOC
Chapter 15. Connections, Extensions, and Further Directions
15.1 Optimization
15.2 Linear Regression from Raw Materials
15.3 Building Logistic Regression from Raw Materials
15.4 SVM from Raw Materials
15.5 Neural Networks
15.6 Probabilistic Graphical Models
15.7 EOC
Appendix A. mlwpy.py Listing
