In this chapter, we learned about cross-validation and its different methods, including holdout cross-validation and k-fold cross-validation. We saw that k-fold cross-validation essentially repeats holdout validation k times, so that every observation is used for testing exactly once. We implemented k-fold cross-validation using the diamond dataset, and we used it to compare different models, finding that the best-performing model was the random forest model.
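As a quick refresher, here is a minimal sketch of k-fold cross-validation, assuming the diamond dataset is loaded through seaborn as a stand-in for the chapter's dataset, with only the numeric predictors kept and illustrative parameter values:

```python
# A minimal sketch of k-fold cross-validation; the seaborn diamonds
# dataset stands in for the chapter's diamond dataset.
import seaborn as sns
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

diamonds = sns.load_dataset('diamonds')
# Keep only the numeric predictors for simplicity.
X = diamonds[['carat', 'depth', 'table', 'x', 'y', 'z']]
y = diamonds['price']

# Ten folds: each observation appears in the test fold exactly once.
kfold = KFold(n_splits=10, shuffle=True, random_state=42)
model = RandomForestRegressor(n_estimators=50, random_state=42)
scores = cross_val_score(model, X, y, cv=kfold,
                         scoring='neg_mean_squared_error')
print(f'Mean CV MSE: {-scores.mean():,.0f}')
```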
Then, we discussed hyperparameter tuning and the exhaustive grid-search method used to perform it. We implemented hyperparameter tuning, again using the diamond dataset, and compared the tuned and untuned models, finding that tuning the hyperparameters made the model perform better.
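The following sketch shows exhaustive grid search with scikit-learn's GridSearchCV, reusing X and y from the previous snippet; the parameter grid shown here is purely illustrative, not the one from the chapter:

```python
# A minimal grid-search sketch; every combination in param_grid is
# evaluated with 5-fold cross-validation.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [50, 100],
    'max_depth': [8, 16, None],
}
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid,
    cv=5,
    scoring='neg_mean_squared_error',
)
grid.fit(X, y)
print('Best parameters:', grid.best_params_)
print(f'Best CV MSE: {-grid.best_score_:,.0f}')
```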
In the next chapter, we will study feature selection methods, dimensionality reduction and principal component analysis (PCA), and feature engineering. We will also learn how to improve a model using feature engineering.