Index
Cover Page
Title Page
Copyright Page
Dedication Page
Contents
Preface
Prologue: A Machine Learning Sampler
1 - The Ingredients of Machine Learning
1.1 Tasks: The Problems that can be Solved with Machine Learning
Looking for Structure
Evaluating Performance on a Task
1.2 Models: The Output of Machine Learning
Geometric Models
Probabilistic Models
Logical Models
Grouping and Grading
1.3 Features: The Workhorses of Machine Learning
Two Uses of Features
Feature Construction and Transformation
Interaction Between Features
1.4 Summary and Outlook
What You’ll Find in the Rest of the Book
2 - Binary Classification and Related Tasks
2.1 Classification
Assessing Classification Performance
Visualising Classification Performance
2.2 Scoring and Ranking
Assessing and Visualising Ranking Performance
Turning Rankers into Classifiers
2.3 Class Probability Estimation
Assessing Class Probability Estimates
Turning Rankers into Class Probability Estimators
2.4 Binary Classification and Related Tasks: Summary and Further Reading
3 - Beyond Binary Classification
3.1 Handling More Than Two Classes
Multi-class Classification
Multi-class Scores and Probabilities
3.2 Regression
3.3 Unsupervised and Descriptive Learning
Predictive and Descriptive Clustering
Other Descriptive Models
3.4 Beyond Binary Classification: Summary and Further Reading
4 - Concept Learning
4.1 The Hypothesis Space
Least General Generalisation
Internal Disjunction
4.2 Paths Through the Hypothesis Space
Most General Consistent Hypotheses
Closed Concepts
4.3 Beyond Conjunctive Concepts
Using First-order Logic
4.4 Learnability
4.5 Concept Learning: Summary and Further Reading
5 - Tree Models
5.1 Decision Trees
5.2 Ranking and Probability Estimation Trees
Sensitivity to Skewed Class Distributions
5.3 Tree Learning as Variance Reduction
Regression Trees
Clustering Trees
5.4 Tree Models: Summary and Further Reading
6 - Rule Models
6.1 Learning Ordered Rule Lists
Rule Lists for Ranking and Probability Estimation
6.2 Learning Unordered Rule Sets
Rule Sets for Ranking and Probability Estimation
A Closer Look at Rule Overlap
6.3 Descriptive Rule Learning
Rule Learning for Subgroup Discovery
Association Rule Mining
6.4 First-order Rule Learning
6.5 Rule Models: Summary and Further Reading
7 - Linear Models
7.1 The Least-squares Method
Multivariate Linear Regression
Regularised Regression
Using Least-squares Regression for Classification
7.2 The Perceptron
7.3 Support Vector Machines
Soft Margin SVM
7.4 Obtaining Probabilities from Linear Classifiers
7.5 Going Beyond Linearity with Kernel Methods
7.6 Linear Models: Summary and Further Reading
8 - Distance-based Models
8.1 So Many Roads
8.2 Neighbours and Exemplars
8.3 Nearest-neighbour Classification
8.4 Distance-based Clustering
K-means Algorithm
Clustering Around Medoids
Silhouettes
8.5 Hierarchical Clustering
8.6 From Kernels to Distances
8.7 Distance-based Models: Summary and Further Reading
9 - Probabilistic Models
9.1 The Normal Distribution and its Geometric Interpretations
9.2 Probabilistic Models for Categorical Data
Using a Naive Bayes Model for Classification
Training a Naive Bayes Model
9.3 Discriminative Learning by Optimising Conditional Likelihood
9.4 Probabilistic Models with Hidden Variables
Expectation-Maximisation
Gaussian Mixture Models
9.5 Compression-based Models
9.6 Probabilistic Models: Summary and Further Reading
10 - Features
10.1 Kinds of Feature
Calculations on Features
Categorical, Ordinal and Quantitative Features
Structured Features
10.2 Feature Transformations
Thresholding and Discretisation
Normalisation and Calibration
Incomplete Features
10.3 Feature Construction and Selection
Matrix Transformations and Decompositions
10.4 Features: Summary and Further Reading
11 - Model Ensembles
11.1 Bagging and Random Forests
11.2 Boosting
Boosted Rule Learning
11.3 Mapping the Ensemble Landscape
Bias, Variance and Margins
Other Ensemble Methods
Meta-learning
11.4 Model Ensembles: Summary and Further Reading
12 - Machine Learning Experiments
12.1 What to Measure
12.2 How to Measure It
12.3 How to Interpret It
Interpretation of Results Over Multiple Data Sets
12.4 Machine Learning Experiments: Summary and Further Reading
Epilogue: Where to Go from Here
Important Points to Remember
References
Index