Boosting is another approach to ensemble learning. There are many boosting methods, but one of the most successful and popular ones has been the AdaBoost algorithm, short for adaptive boosting. The core idea behind this algorithm is that, instead of fitting many predictors independently, we fit a sequence of weak learners, where each learner depends on the result of the previous one. In the AdaBoost algorithm, every iteration reweights the training samples based on the results of the previous learners in the sequence.
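To make this concrete, here is a minimal sketch of fitting an AdaBoost ensemble with scikit-learn; the synthetic dataset and hyperparameters are illustrative choices, not taken from the discussion above.

```python
# Minimal sketch: AdaBoost fits a sequence of weak learners.
# In scikit-learn, the default weak learner is a depth-1 decision tree (a "stump").
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit 50 weak learners in sequence; each round reweights the training samples.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```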
For example, in classification, the basic idea is that examples that are misclassified gain weight and examples that are classified correctly lose weight. So, the next learner in the sequence focuses more on the misclassified examples.
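The reweighting loop itself is short. Below is a sketch of the standard discrete AdaBoost update, assuming binary labels in {-1, +1} and decision stumps as the weak learners; the function names here are hypothetical, and this follows the textbook algorithm rather than any code shown above.

```python
# Sketch of the discrete AdaBoost reweighting loop (binary labels in {-1, +1}).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                  # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)     # weak learner sees current weights
        pred = stump.predict(X)
        err = np.sum(w[pred != y])           # weighted error (w sums to 1)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # learner's vote strength
        # Misclassified examples gain weight, correct ones lose weight:
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                         # renormalize to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    # Final prediction is a weighted majority vote of the weak learners.
    votes = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(votes)
```

Note how the update `w *= np.exp(-alpha * y * pred)` captures exactly the idea above: when `pred` disagrees with `y`, the product `y * pred` is -1 and the weight is multiplied by `exp(alpha) > 1`; when they agree, it is multiplied by `exp(-alpha) < 1`.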