Gradient Boosting Hyperparameter Tuning: Classifier Example


Many machine learning algorithms end up producing only a weak model. You try another algorithm and still get a weak model. What if I told you there is a method that turns all those weak models into a strong one? It may sound hard to believe at first, but after reading this post you will know how to convert weak models into a strong model using boosting, and how to tune the Gradient Boosting hyperparameters.

What is Boosting?

Boosting is an ensemble method that aggregates many weak models into a single strong model. A weak model is, of course, still better than random guessing, but not by much. A boosting algorithm fits a model to the data, then fits the next model with extra focus on the examples the previous models got wrong, and repeats this over many iterations. Unlike a random forest, it learns from its mistakes in each iteration: in a random forest all the trees are independent, while in boosting each successive model learns from the errors of the ones before it.
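
To make this concrete, here is a minimal, illustrative sketch of gradient boosting for regression: start from a constant prediction and let each shallow tree fit the residuals (the mistakes) left by the ensemble so far. This is a toy loop on synthetic data, not scikit-learn's internal implementation.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: a noisy sine wave
rng = np.random.RandomState(42)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

# Start from a constant prediction, then let each shallow tree
# correct the mistakes of the ensemble built so far
prediction = np.full_like(y, y.mean())
learning_rate = 0.1
for _ in range(100):
    residuals = y - prediction                      # current mistakes
    stump = DecisionTreeRegressor(max_depth=1)      # weak learner
    stump.fit(X, residuals)
    prediction += learning_rate * stump.predict(X)  # small corrective step

print(f"Training MSE after boosting: {np.mean((y - prediction) ** 2):.4f}")

Each stump on its own is a weak model, but because every new stump targets what the current ensemble still gets wrong, the combined prediction keeps improving.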

Pros and Cons of Gradient Boosting

Gradient Boosting has both advantages and disadvantages; I have listed some of them below.

Pros

  1. It is an extremely powerful machine learning technique.
  2. It accepts various types of inputs, which makes it more flexible.
  3. It can be used for both regression and classification.
  4. It reports feature importances for the output (see the sketch after the cons list).

Cons

  1. It takes longer to train because the trees are built sequentially, so training cannot be parallelized.
  2. It is more prone to overfitting, because each model focuses on the previous models' mistakes and can end up chasing noise.
  3. In some cases, tuning is hard as it has many hyperparameters to tune.
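
As a quick illustration of pro #4, a fitted gradient boosting model exposes a feature_importances_ attribute. The following sketch assumes a small synthetic dataset built with make_classification; the exact numbers will vary, but the informative features should receive the highest scores.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data: 5 features, only 2 of them informative
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)

clf = GradientBoostingClassifier().fit(X, y)

# One importance score per feature; higher means the feature
# was used for more (and better) splits across the trees
for i, importance in enumerate(clf.feature_importances_):
    print(f"feature {i}: {importance:.3f}")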

When should you use Boosting?

Boosting can be used whether the target is continuous (regression) or categorical (classification), and it works on any type of problem, simple or complex.
Training is sequential in boosting, but prediction is parallel across the trees. Therefore it is a good choice if you want fast predictions after the model is deployed.

Best Hyperparameters for the Boosting Algorithms

Step 1: Import the necessary libraries

import numpy as np   # numerical computations
import pandas as pd  # data loading and manipulation
import sklearn       # machine learning algorithms

Step 2: Import the dataset

train_features = pd.read_csv("train_features.csv")
train_label = pd.read_csv("train_label.csv")

The dataset is the same one used in the Support Vector Machines post.

Step 3: Import the boosting algorithm

Let’s import the boosting algorithm from the scikit-learn package

from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
print(GradientBoostingClassifier())
print(GradientBoostingRegressor())

Printing the classifier and the regressor displays their parameters.
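
Note that recent scikit-learn versions only print the parameters you changed from their defaults, so the printed representation may look nearly empty. If you want the full parameter dictionary, get_params() returns it; a quick sketch:

# Full dictionary of hyperparameters and their current values
print(GradientBoostingClassifier().get_params())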

Step 4: Choose the best Hyperparameters

It is a bit confusing to choose the best hyperparameters for boosting at first, but once you know how the boosting algorithms work, the choice becomes much easier. The ones I have chosen here are learning_rate, max_depth, and n_estimators. The max_depth and n_estimators parameters are the same ones we tuned for a random forest; learning_rate is the extra one specific to boosting.
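
To build some intuition for learning_rate: each tree's contribution is shrunk by this factor, so smaller values take smaller corrective steps and usually need more trees, but they tend to generalize better. Here is a quick sketch on a synthetic dataset (an assumption for illustration; your numbers will differ):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same number of trees, two different shrinkage factors
for lr in (1.0, 0.1):
    clf = GradientBoostingClassifier(n_estimators=250, learning_rate=lr,
                                     random_state=0)
    clf.fit(X_train, y_train)
    print(f"learning_rate={lr}: test accuracy = {clf.score(X_test, y_test):.3f}")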

Step 5: Call the Boosting classifier constructor and define the parameters.

Here you will define the list of candidate values for each of the hyperparameters.

gbc = GradientBoostingClassifier()
parameters = {
    "n_estimators":[5,50,250,500],
    "max_depth":[1,3,5,7,9],
    "learning_rate":[0.01,0.1,1,10,100]
}
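
Note that this grid already defines 4 x 5 x 5 = 100 parameter combinations, and with the 5-fold cross-validation used in the next step, GridSearchCV will train 500 models, so the search can take a while.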

Step 6: Use GridSearchCV() for the cross-validation

You will pass the boosting classifier, the parameter grid, and the number of cross-validation folds to the GridSearchCV() method. I am using 5 folds. Then fit the GridSearchCV() object on the training features and labels.

from sklearn.model_selection import GridSearchCV
cv = GridSearchCV(gbc,parameters,cv=5)
cv.fit(train_features,train_label.values.ravel())


Step 7: Print out the best parameters.

You can find the best parameters for the boosting algorithm using cv.best_params_. But I also want to show you the mean score and standard deviation for each parameter combination, using the following custom-defined function.

def display(results):
    print(f'Best parameters are: {results.best_params_}')
    print("\n")
    mean_score = results.cv_results_['mean_test_score']
    std_score = results.cv_results_['std_test_score']
    params = results.cv_results_['params']
    # Use a distinct loop variable so the params list is not shadowed
    for mean, std, param in zip(mean_score, std_score, params):
        print(f'{round(mean, 3)} +/- {round(std, 3)} for {param}')

display(cv)


You can clearly see that the best parameters are:

{'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 250}

Use these parameters while building your model using Boosting Algorithm.
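
For example, you can either rebuild the classifier from the best parameter dictionary or reuse the refitted estimator that GridSearchCV stores (with the default refit=True, cv.best_estimator_ has already been retrained on the whole training set):

# Option 1: rebuild the model from the best parameters found above
best_gbc = GradientBoostingClassifier(**cv.best_params_)
best_gbc.fit(train_features, train_label.values.ravel())

# Option 2: reuse the estimator GridSearchCV already refit on all the data
best_gbc = cv.best_estimator_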


Conclusion

If you look at the results, you will notice that the boosting algorithm achieves better scores than the random forest classifier. Using the GridSearchCV() method, you can easily find the best Gradient Boosting hyperparameters for your model. If GridSearchCV() does not improve the score, you should consider adding more data.

If you want to know in more detail how Gradient Boosting works, you can refer to the Gradient Boosting Wikipedia page.

Other Queries

In this section, you will find answers to queries asked by data science readers.

Q: What is the max_depth hyperparameter in gradient boosting?

It is the maximum depth of the individual regression estimators (the trees). It allows you to limit the number of nodes in each tree. You can tune it to find the best results; its best value depends on the interaction between the input variables.
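
As a rough sketch of what this means: a binary tree of depth d has at most 2^(d+1) - 1 nodes, so max_depth=3 caps each tree at 15 nodes. You can verify this on a fitted model by inspecting the individual regression trees stored in its estimators_ attribute (illustrated here on synthetic data):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = GradientBoostingClassifier(max_depth=3, n_estimators=50).fit(X, y)

# For binary classification, estimators_ has shape (n_estimators, 1)
first_tree = clf.estimators_[0, 0]
print("depth:", first_tree.tree_.max_depth)   # at most 3
print("nodes:", first_tree.tree_.node_count)  # at most 2**(3 + 1) - 1 = 15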

Source:

sklearn.ensemble.GradientBoostingClassifier
