Difference between bagging and boosting


Shakshi Sehrawat

Updated on 21st June, 2024, 3 min read

As we all know, ensemble learning helps improve machine learning results by combining several models. This approach yields better predictive performance than any single model on its own. The primary idea is to train a set of classifiers (experts) and let them vote. Bagging and Boosting are two types of ensemble learning. Both reduce the variance of a single estimate by merging several estimates from different models, so the result can be a model with greater stability. Let's understand these two terms at a glance.


Bagging: A homogeneous weak-learner approach in which each learner is trained independently and in parallel, and their outputs are combined to determine the model average.

Boosting: Also a homogeneous weak-learner approach, but it works differently from Bagging. Here the learners are trained sequentially and adaptively, each one improving on the predictions of the previous ones.

Let's look at both of them in detail and understand the difference between Bagging and Boosting.

Bagging

Bootstrap Aggregating, also called bagging, is an ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It lowers variance and helps to avoid overfitting. It is usually applied to decision tree methods. Bagging is a special case of the model-averaging approach.

Description of the Technique

Given a set D of d tuples, at each iteration i a training set Di of d tuples is drawn from D by row sampling with replacement, so Di may contain repeated tuples from D (this is the bootstrap). A classifier model Mi is then learned from each training set Di. Each classifier Mi returns its class prediction, and the bagged classifier M* counts the votes and assigns the class with the most votes to an unknown sample X.
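As a rough illustration of this procedure, the sketch below draws bootstrap samples, fits one classifier per sample, and takes a majority vote. The synthetic dataset, the choice of DecisionTreeClassifier, and the number of iterations are assumptions made only for the example, not part of the original description.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative data set D of d tuples (assumption: synthetic data).
X, y = make_classification(n_samples=500, random_state=0)
d = len(X)

models = []
for i in range(10):                       # 10 iterations, an arbitrary choice
    # Row sampling with replacement: training set Di of d tuples.
    idx = np.random.choice(d, size=d, replace=True)
    Mi = DecisionTreeClassifier().fit(X[idx], y[idx])
    models.append(Mi)

# Bagged classifier M*: each Mi votes, and the majority class wins.
votes = np.array([Mi.predict(X) for Mi in models])
M_star_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```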

Implementation Steps of Bagging

The first step is to create multiple subsets of equal size from the original data set by selecting observations with replacement.

The second step is to build a base model on each of these subsets.

The third step is to train each model in parallel on its own training set, independently of the others.

The last step is to determine the final prediction by combining the predictions of all the models, as sketched in the example below.
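These steps are what scikit-learn's BaggingClassifier automates. The following is a minimal usage sketch; the dataset, the decision-tree base estimator, and the parameter values are illustrative assumptions, and the `estimator` keyword assumes a recent scikit-learn version.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)   # assumed data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# n_estimators subsets are drawn with replacement (bootstrap=True),
# one decision tree is fit per subset, and predictions are combined by voting.
bag = BaggingClassifier(
    estimator=DecisionTreeClassifier(),
    n_estimators=50,
    bootstrap=True,
    random_state=42,
)
bag.fit(X_train, y_train)
print("bagging accuracy:", bag.score(X_test, y_test))
```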

Boosting

Boosting is an ensemble modeling technique that attempts to build a strong classifier from a number of weak classifiers. It does this by combining weak models in a series. First, a model is built from the training data. Then the next model is built to correct the errors of the first. The procedure continues, adding models, until either the complete training data set is predicted correctly or the maximum number of models has been added.
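One rough way to see this "each model corrects the previous one" idea is the residual-fitting sketch below. It is a gradient-boosting-style toy for regression rather than the exact procedure described above, and the data, tree depth, learning rate, and number of rounds are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (assumption: y = sin(x) plus noise).
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.zeros_like(y)
models = []
for step in range(100):
    residual = y - prediction                 # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    models.append(tree)
    prediction += learning_rate * tree.predict(X)   # next model corrects them
```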

Boosting Algorithms

There are various boosting algorithms. The original boosting algorithms proposed by Robert Schapire and Yoav Freund were not adaptive and could not take full advantage of the weak learners. Schapire and Freund later developed AdaBoost, an adaptive boosting algorithm.
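AdaBoost is available in scikit-learn; below is a minimal usage sketch, with the dataset and parameter values chosen purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)   # assumed data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Weak learners (decision stumps by default) are trained sequentially;
# each round gives more weight to the samples the previous learners got wrong.
ada = AdaBoostClassifier(n_estimators=100, random_state=42)
ada.fit(X_train, y_train)
print("AdaBoost accuracy:", ada.score(X_test, y_test))
```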

Similarities Between Bagging and Boosting

Bagging and Boosting are both frequently used methods and share the basic property of being ensemble methods. Their similarities are listed below:

  1. Both are ensemble methods that obtain N learners from a single learner.
  2. Both generate several training data sets by random sampling.
  3. Both make the final decision by averaging the N learners (or by taking the majority vote).
  4. Both are good at reducing variance and provide higher stability.

Differences Between Bagging and Boosting

The differences are summarized in the table below:

Bagging | Boosting
Bagging aims to reduce variance, not bias. | Boosting aims to reduce bias, not variance.
In bagging, each model is weighted equally. | In boosting, models are weighted according to their performance.
Models are built independently of each other. | New models are influenced by the performance of previously built models.
Bagging also helps address the overfitting problem. | Boosting helps in reducing bias.
In the simplest case, bagging combines predictions of the same type. | In the simplest case, boosting combines predictions of different types.
In bagging, the base classifiers are trained in parallel. | In boosting, the base classifiers are trained sequentially.
A classic example of bagging is the random forest model. | A classic example of boosting is AdaBoost.

What is Stacking?

Stacking (Stacked Generalization) is an ensemble learning technique that combines multiple models to improve predictive performance.

It involves the following steps:

Base Models: Multiple models (level-0 models) are trained on the same dataset.

Meta-Model: A new model (the level-1 or meta-model) is trained to combine the predictions of the base models, using those predictions as its input features; see the sketch below.
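scikit-learn provides this pattern through StackingClassifier. The sketch below is a minimal example; the particular dataset, base models, and meta-model are assumptions chosen only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=42)   # assumed data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Level-0 base models whose predictions become features for the meta-model.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
    ("svc", SVC(probability=True, random_state=42)),
]

# Level-1 meta-model trained on the base models' cross-validated predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("stacking accuracy:", stack.score(X_test, y_test))
```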
