Gradient Boosting Flashcards

1
Q

What are gradient boosted trees? ⭐️

A

Gradient boosting is a machine learning technique for regression and classification problems that produces a prediction model as an ensemble of weak prediction models, typically shallow decision trees. The ensemble is built stage-wise: each new tree is fit to the negative gradient of the loss with respect to the current ensemble's predictions (for squared error, simply the residuals), and its output is added to the ensemble scaled by a shrinkage factor (the learning rate).
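
A from-scratch sketch of the idea for squared-error regression (a toy illustration, not a production implementation): each stage fits a small tree to the current residuals and adds its shrunken predictions to the ensemble.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

learning_rate = 0.1
prediction = np.full(len(y), y.mean())  # stage 0: a constant model
trees = []
for _ in range(100):
    residuals = y - prediction  # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # shrunken additive update
    trees.append(tree)

print("training MSE:", np.mean((y - prediction) ** 2))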

2
Q

What’s the difference between random forest and gradient boosting? ⭐️

A

A random forest builds each tree independently on a bootstrap sample and combines the results only at the end, by averaging (regression) or majority vote (classification); this is bagging, which mainly reduces variance.
Gradient boosting builds one tree at a time, each new tree correcting the errors of the ensemble built so far, so results are combined along the way; this sequential fitting mainly reduces bias.
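
A minimal side-by-side sketch (the dataset and settings are arbitrary choices for illustration):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# Independent trees, averaged at the end (bagging).
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
# Sequential trees, each correcting the current ensemble (boosting).
gb = GradientBoostingClassifier(n_estimators=100, random_state=0)

print("random forest:", cross_val_score(rf, X, y, cv=5).mean())
print("gradient boosting:", cross_val_score(gb, X, y, cv=5).mean())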

3
Q

Is it possible to parallelize training of a gradient boosting model? How would you do it? ⭐️

A

Yes, with a caveat: the boosting stages themselves are sequential (each tree depends on the previous ones), but building each individual tree can be parallelized, since split finding can be distributed across features or histogram bins over CPU cores or a GPU. Frameworks expose this directly. For example, XGBoost parallelizes across CPU cores via n_jobs and supports GPU-accelerated histogram training: tree_method = 'gpu_hist' in older releases, or tree_method = 'hist' together with device = 'cuda' since XGBoost 2.0.
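
A hedged sketch of the XGBoost options (model construction only; X_train and y_train are assumed to exist):

from xgboost import XGBClassifier

# Multi-core CPU training: histogram split finding across all cores.
model_cpu = XGBClassifier(tree_method="hist", n_jobs=-1)

# GPU training (XGBoost >= 2.0 spelling; older releases used
# tree_method="gpu_hist" instead).
model_gpu = XGBClassifier(tree_method="hist", device="cuda")

# model_cpu.fit(X_train, y_train)  # hypothetical training data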

4
Q

What are the main parameters in the gradient boosting model? ⭐️

A

There are many parameters; a few key ones, shown with their scikit-learn GradientBoosting defaults:

learning_rate=0.1 (shrinkage applied to each tree's contribution).
n_estimators=100 (number of boosting stages, i.e. trees).
max_depth=3 (maximum depth of each individual tree).
min_samples_split=2 (minimum samples needed to split an internal node).
min_samples_leaf=1 (minimum samples required at a leaf node).
subsample=1.0 (fraction of rows used per tree; values below 1.0 give stochastic gradient boosting).
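
A minimal sketch spelling those defaults out explicitly (scikit-learn):

from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(
    learning_rate=0.1,   # shrinkage per tree
    n_estimators=100,    # boosting stages
    max_depth=3,
    min_samples_split=2,
    min_samples_leaf=1,
    subsample=1.0,
)
# A common pattern: lower learning_rate and raise n_estimators together.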

5
Q

How do you approach tuning parameters in XGBoost or LightGBM? 🚀

A

Depending on the dataset and the time budget, tuning can be done manually or with hyperparameter-optimization frameworks such as Optuna or Hyperopt. When tuning manually, the main knobs are tree complexity (max_depth or num_leaves, plus min_child_weight in XGBoost / min_data_in_leaf in LightGBM), the learning_rate together with the number of trees, and row/column subsampling (subsample, colsample_bytree). The goal is to balance bias and variance so the model learns the general structure of the data rather than overfitting it.
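
A hedged Optuna sketch for LightGBM (the search ranges and toy dataset are arbitrary assumptions for illustration):

import optuna
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 8, 256, log=True),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    # subsample_freq=1 so row subsampling actually takes effect in LightGBM
    model = LGBMClassifier(n_estimators=200, subsample_freq=1, **params)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)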

6
Q

How do you select the number of trees in the gradient boosting model? ⭐️

A

Most implementations default to a relatively small number of trees, on the order of a hundred, but the right number depends on the learning rate and the data, so it is usually chosen empirically. With scikit-learn you can grid-search the n_estimators parameter; alternatively, and often more efficiently, set a large number of trees and rely on early stopping against a validation set (n_iter_no_change in scikit-learn, or the early-stopping options in XGBoost and LightGBM).
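
A hedged grid-search sketch (the grid values and dataset are arbitrary illustrations):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

grid = GridSearchCV(
    GradientBoostingClassifier(learning_rate=0.1),
    param_grid={"n_estimators": [50, 100, 200, 500, 1000]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)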
