In k-fold cross-validation, we essentially perform holdout validation many times. We partition the dataset into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k−1 subsamples are used as training data. This process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation.
The following screenshot shows a visual example of 5-fold cross-validation (k=5):

Here, we see that our dataset gets divided into five parts.
The following are the steps we follow in the 5-fold cross-validation method:
- We use the first part for testing and the rest for training, which gives us the first estimation of our evaluation metrics.
- We use the second part for testing and the rest for training, and from that we get a second estimation of our evaluation metrics.
- We use the third part for testing and the rest for training, and so on. In this way, we get five estimations of the evaluation metrics.
In k-fold cross-validation, after the k estimations of the evaluation metric have been observed, an average of them is taken. This gives us a better estimation of the performance of the model. So, instead of having just one estimation of the evaluation metric, we get k estimations with k-fold cross-validation, and we can then take their average to get a better estimation of the model's performance.
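The following is a minimal sketch of this procedure using scikit-learn's KFold splitter; the dataset, model, and accuracy metric are placeholders chosen for illustration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Split the data into k=5 folds.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    # Train on the four remaining folds, evaluate on the held-out fold.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print("Per-fold accuracies:", np.round(scores, 3))
print("Average accuracy:", np.mean(scores))
```

The five per-fold accuracies are the k estimations described previously, and their mean is the single averaged estimation of the model's performance.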
A further advantage of the k-fold cross-validation method is that it can be used not only for model evaluation but also for hyperparameter tuning.
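For instance, scikit-learn's GridSearchCV scores every candidate hyperparameter combination with k-fold cross-validation and keeps the one with the best average score; the estimator and parameter grid below are illustrative choices, not a fixed recipe:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Candidate hyperparameter values to compare (illustrative).
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}

# Each combination is evaluated with 5-fold cross-validation,
# and the one with the best mean score across the folds wins.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best mean CV score:", search.best_score_)
```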
The following are the variants of k-fold cross-validation:
- Repeated cross-validation: In repeated cross-validation, we perform k-fold cross-validation many times. For example, if we want 30 estimations of our evaluation metric, we can run 5-fold cross-validation six times.
- Leave-One-Out (LOO) cross-validation: In this method, we use the whole dataset for training except for a single point, which we hold out for evaluation. We then repeat this process for every data point in the dataset. Both variants are sketched in the code that follows this list.
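The following is a minimal sketch of both variants using scikit-learn's RepeatedKFold and LeaveOneOut splitters; the dataset and model are placeholders used only for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, RepeatedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Repeated cross-validation: 5-fold cross-validation run six times
# gives 30 estimations of the evaluation metric.
rkf = RepeatedKFold(n_splits=5, n_repeats=6, random_state=0)
repeated_scores = cross_val_score(model, X, y, cv=rkf)
print(len(repeated_scores), "estimations, mean:", repeated_scores.mean())

# Leave-One-Out: each of the n data points is held out once for evaluation,
# giving n estimations (one per point).
loo = LeaveOneOut()
loo_scores = cross_val_score(model, X, y, cv=loo)
print(len(loo_scores), "estimations, mean:", loo_scores.mean())
```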