Index
Preface
What to Expect
Who This Book Is For
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
1. Introduction
Libraries Used
Installation with Pip
Installation with Conda
2. Overview of the Machine Learning Process
3. Classification Walkthrough: Titanic Dataset
Project Layout Suggestion
Imports
Ask a Question
Terms for Data
Gather Data
Clean Data
Create Features
Sample Data
Impute Data
Normalize Data
Refactor
Baseline Model
Various Families
Stacking
Create Model
Evaluate Model
Optimize Model
Confusion Matrix
ROC Curve
Learning Curve
Deploy Model
4. Missing Data
Examining Missing Data
Dropping Missing Data
Imputing Data
Adding Indicator Columns
5. Cleaning Data
Column Names
Replacing Missing Values
6. Exploring
Data Size
Summary Stats
Histogram
Scatter Plot
Joint Plot
Pair Grid
Box and Violin Plots
Comparing Two Ordinal Values
Correlation
RadViz
Parallel Coordinates
7. Preprocess Data
Standardize
Scale to Range
Dummy Variables
Label Encoder
Frequency Encoding
Pulling Categories from Strings
Other Categorical Encoding
Date Feature Engineering
Add col_na Feature
Manual Feature Engineering
8. Feature Selection
Collinear Columns
Lasso Regression
Recursive Feature Elimination
Mutual Information
Principal Component Analysis
Feature Importance
9. Imbalanced Classes
Use a Different Metric
Tree-based Algorithms and Ensembles
Penalize Models
Upsampling Minority
Generate Minority Data
Downsampling Majority
Upsampling Then Downsampling
10. Classification
Logistic Regression
Naive Bayes
Support Vector Machine
K-Nearest Neighbor
Decision Tree
Random Forest
XGBoost
Gradient Boosted with LightGBM
TPOT
11. Model Selection
Validation Curve
Learning Curve
12. Metrics and Classification Evaluation
Confusion Matrix
Metrics
Accuracy
Recall
Precision
F1
Classification Report
ROC
Precision-Recall Curve
Cumulative Gains Plot
Lift Curve
Class Balance
Class Prediction Error
Discrimination Threshold
13. Explaining Models
Regression Coefficients
Feature Importance
LIME
Tree Interpretation
Partial Dependence Plots
Surrogate Models
Shapley
14. Regression
Baseline Model
Linear Regression
SVMs
K-Nearest Neighbor
Decision Tree
Random Forest
XGBoost Regression
LightGBM Regression
15. Metrics and Regression Evaluation
Metrics
Residuals Plot
Heteroscedasticity
Normal Residuals
Prediction Error Plot
16. Explaining Regression Models
Shapley
17. Dimensionality Reduction
PCA
UMAP
t-SNE
PHATE
18. Clustering
K-Means
Agglomerative (Hierarchical) Clustering
Understanding Clusters
19. Pipelines
Classification Pipeline
Regression Pipeline
PCA Pipeline
Index