Random forests are a modification of bagged decision trees that build a large collection of de-correlated trees to further improve predictive performance. They have become a very popular “out-of-the-box” or “off-the-shelf” learning algorithm that enjoys good predictive performance with relatively little hyperparameter tuning. Many modern implementations of random forests exist; however, Leo Breiman’s algorithm (Breiman, 2001) has largely become the authoritative procedure. This chapter will cover the fundamentals of random forests.
This chapter leverages the following packages. Some of these packages play a supporting role; however, the emphasis is on how to implement random forests with the ranger (Wright and Ziegler, 2017) and h2o packages.
# Helper packages
library(dplyr) # for data wrangling
library(ggplot2) # for awesome graphics
# Modeling packages
library(ranger) # a C++ implementation of random forest
library(h2o) # a Java-based implementation of random forest
We’ll continue working with the ames_train
data set created in Section 2.7 to illustrate the main concepts.
Random forests are built using the same fundamental principles as decision trees (Chapter 9) and bagging (Chapter 10). Bagging trees introduces a random component into the tree building process by building many trees on bootstrapped copies of the training data. Bagging then aggregates the predictions across all the trees; this aggregation reduces the variance of the overall procedure and results in improved predictive performance. However, as we saw in Section 10.6, simply bagging trees results in tree correlation that limits the effect of variance reduction.
Random forests help to reduce tree correlation by injecting more randomness into the tree-growing process.1 More specifically, while growing a decision tree during the bagging process, random forests perform split-variable randomization where each time a split is to be performed, the search for the split variable is limited to a random subset of mtry of the original p features. Typical default values are mtry = p/3 (regression) and mtry = √p (classification), but this should be considered a tuning parameter.
The basic algorithm for a regression or classification random forest can be generalized as follows:
1. Given a training data set
2. Select number of trees to build (n_trees)
3. for i = 1 to n_trees do
4. | Generate a bootstrap sample of the original data
5. | Grow a regression/classification tree to the bootstrapped data
6. | for each split do
7. | | Select m_try variables at random from all p variables
8. | | Pick the best variable/split-point among the m_try
9. | | Split the node into two child nodes
10. | end
11. | Use typical tree model stopping criteria to determine when a tree is complete (but do not prune)
12. end
13. Output ensemble of trees
When mtry = p, the algorithm is equivalent to bagging decision trees.
Since the algorithm randomly selects a bootstrap sample to train on and a random sample of features to use at each split, a more diverse set of trees is produced, which tends to lessen tree correlation beyond bagged trees and often dramatically increases predictive power.
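To make this connection concrete, below is a minimal sketch (the object names are illustrative) that contrasts the two extremes of mtry using the ranger package and the ames_train data: searching all p features at every split reproduces bagging, while the regression default of p/3 yields a random forest.
# bagging and random forests differ only in mtry
p <- length(setdiff(names(ames_train), "Sale_Price"))
# mtry = p: every split searches all features --> bagged trees
bagged <- ranger(Sale_Price ~ ., data = ames_train, mtry = p, seed = 123)
# mtry = floor(p / 3): split-variable randomization --> random forest
rf <- ranger(Sale_Price ~ ., data = ames_train, mtry = floor(p / 3), seed = 123)
# compare OOB RMSEs of the two ensembles
sqrt(c(bagging = bagged$prediction.error, random_forest = rf$prediction.error))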
Random forests have become popular because they tend to provide very good out-of-the-box performance. Although they have several hyperparameters that can be tuned, the default values tend to produce good results. Moreover, Probst et al. (2018) illustrated that among the more popular machine learning algorithms, random forests have the least variability in their prediction accuracy when tuning.
For example, if we train a random forest model2 with all hyperparameters set to their default values, we get an OOB RMSE that is better than any model we’ve run thus far (without any tuning).
By default, ranger sets the mtry parameter to floor(sqrt(number of features)); however, for regression problems the preferred mtry to start with is floor(number of features / 3). We also set respect.unordered.factors = "order". This specifies how to treat unordered factor variables and we recommend setting this to "order" (see Friedman et al. (2001) Section 9.2.4 for details).
# number of features
n_features <- length(setdiff(names(ames_train), "Sale_Price"))
# train a default random forest model
ames_rf1 <- ranger(
Sale_Price ~ .,
data = ames_train,
mtry = floor(n_features / 3),
respect.unordered.factors = "order",
seed = 123
)
# get OOB RMSE
(default_rmse <- sqrt(ames_rf1$prediction.error))
## [1] 24859
Although random forests perform well out-of-the-box, there are several tunable hyperparameters that we should consider when training a model. Although we briefly discuss the main hyperparameters, Probst et al. (2019) provide a much more thorough discussion. The main hyperparameters to consider include:
(1) The number of trees in the forest
(2) The number of features to consider at any given split: mtry
(3) The complexity of each tree
(4) The sampling scheme
(5) The splitting rule to use during tree construction
(1) and (2) typically have the largest impact on predictive accuracy and should always be tuned. (3) and (4) tend to have marginal impact on predictive accuracy but are still worth exploring. They also have the ability to influence computational efficiency. (5) tends to have the smallest impact on predictive accuracy and is used primarily to increase computational efficiency.
The first consideration is the number of trees within your random forest. Although not technically a hyperparameter, the number of trees needs to be sufficiently large to stabilize the error rate. A good rule of thumb is to start with 10 times the number of features as illustrated in Figure 11.1; however, as you adjust other hyperparameters such as mtry and node size, more or fewer trees may be required. More trees provide more robust and stable error estimates and variable importance measures; however, the impact on computation time increases linearly with the number of trees.
Start with p × 10 trees and adjust as necessary.
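As a rough illustration of this rule of thumb (not the code behind Figure 11.1, and the candidate tree counts are arbitrary), the following sketch refits the baseline model with an increasing number of trees and records the OOB RMSE.
# OOB RMSE as the number of trees grows (illustrative values only)
trees <- c(50, 100, 250, 500, 800, 1200)
oob_rmse <- vapply(trees, function(nt) {
  fit <- ranger(
    Sale_Price ~ .,
    data = ames_train,
    num.trees = nt,
    mtry = floor(n_features / 3),
    respect.unordered.factors = "order",
    seed = 123
  )
  sqrt(fit$prediction.error)
}, numeric(1))
data.frame(num_trees = trees, oob_rmse = oob_rmse)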
The hyperparameter that controls the split-variable randomization feature of random forests is often referred to as mtry and it helps to balance low tree correlation with reasonable predictive strength. With regression problems the default value is often mtry = p/3 and for classification mtry = √p. However, when there are fewer relevant predictors (e.g., noisy data) a higher value of mtry tends to perform better because it makes it more likely to select those features with the strongest signal. When there are many relevant predictors, a lower mtry might perform better.
Start with five evenly spaced values of mtry across the range 2 to p, centered at the recommended default, as illustrated in Figure 11.2.
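One simple way to build such a candidate set (a sketch, not a prescribed recipe) is to spread values evenly between 2 and p and make sure the regression default is among them.
# mtry candidates spanning 2 to p, plus the regression default floor(p / 3)
mtry_candidates <- sort(unique(c(
  floor(seq(2, n_features, length.out = 5)),
  floor(n_features / 3)
)))
mtry_candidates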
Random forests are built on individual decision trees; consequently, most random forest implementations have one or more hyperparameters that allow us to control the depth and complexity of the individual trees. This will often include hyperparameters such as node size, max depth, max number of terminal nodes, or the required node size to allow additional splits. Node size is probably the most common hyperparameter to control tree complexity and most implementations use the default values of one for classification and five for regression as these values tend to produce good results (Díaz-Uriarte and De Andres, 2006; Goldstein et al., 2011). However, Segal (2004) showed that if your data has many noisy predictors and higher mtry values are performing best, then performance may improve by increasing node size (i.e., decreasing tree depth and complexity). Moreover, if computation time is a concern then you can often decrease run time substantially by increasing the node size and have only marginal impacts to your error estimate as illustrated in Figure 11.3.
When adjusting node size, start with three values between 1 and 10 and adjust depending on the impact on accuracy and run time.
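The sketch below (timings will vary by machine and the node sizes chosen are arbitrary) illustrates the accuracy versus run-time trade-off by refitting the baseline model at a few node sizes and timing each fit.
# node size vs. OOB RMSE and training time (illustrative only)
for (ns in c(1, 5, 10)) {
  elapsed <- system.time(
    fit <- ranger(
      Sale_Price ~ .,
      data = ames_train,
      num.trees = n_features * 10,
      mtry = floor(n_features / 3),
      min.node.size = ns,
      respect.unordered.factors = "order",
      seed = 123
    )
  )[["elapsed"]]
  cat("min.node.size =", ns,
      "| OOB RMSE =", round(sqrt(fit$prediction.error)),
      "| seconds =", round(elapsed, 1), "\n")
}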
The default sampling scheme for random forests is bootstrapping where 100% of the observations are sampled with replacement (in other words, each bootstrap copy has the same size as the original training data); however, we can adjust both the sample size and whether to sample with or without replacement. The sample size parameter determines how many observations are drawn for the training of each tree. Decreasing the sample size leads to more diverse trees and thereby lower between-tree correlation, which can have a positive effect on the prediction accuracy. Consequently, if there are a few dominating features in your data set, reducing the sample size can also help to minimize between-tree correlation.
Also, when you have many categorical features with a varying number of levels, sampling with replacement can lead to biased variable split selection (Janitza et al., 2016; Strobl et al., 2007). Consequently, if you have categories that are not balanced, sampling without replacement provides a less biased use of all levels across the trees in the random forest.
Assess 3–4 values of sample size ranging from 25%–100%; if you have unbalanced categorical features, try sampling without replacement.
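As a quick sketch of the sampling discussion (the 63.2% fraction simply mimics the expected share of unique observations in a bootstrap sample; it is not a tuned value), we can compare with- and without-replacement sampling while holding everything else fixed.
# with- vs. without-replacement sampling at a 63.2% sample fraction
for (repl in c(TRUE, FALSE)) {
  fit <- ranger(
    Sale_Price ~ .,
    data = ames_train,
    num.trees = n_features * 10,
    mtry = floor(n_features / 3),
    replace = repl,
    sample.fraction = .632,
    respect.unordered.factors = "order",
    seed = 123
  )
  cat("replace =", repl, "| OOB RMSE =", round(sqrt(fit$prediction.error)), "\n")
}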
Recall that the default splitting rule during random forest tree building consists of selecting, out of all splits of the (randomly selected) mtry candidate variables, the split that minimizes the Gini impurity (in the case of classification) or the SSE (in the case of regression). However, Strobl et al. (2007) illustrated that these default splitting rules favor the selection of features with many possible splits (e.g., continuous variables or categorical variables with many categories) over variables with fewer splits (the extreme case being binary variables, which have only one possible split). Conditional inference trees (Hothorn et al., 2006) implement an alternative splitting mechanism that helps to reduce this variable selection bias.3 However, ensembling conditional inference trees has yet to be proven superior with regards to predictive accuracy and they take a lot longer to train.
To increase computational efficiency, splitting rules can be randomized where only a random subset of possible splitting values is considered for a variable (Geurts et al., 2006). If only a single random splitting value is randomly selected then we call this procedure extremely randomized trees. Due to the added randomness of split points, this method tends to have no improvement, or often a negative impact, on predictive accuracy.
Regarding runtime, extremely randomized trees are the fastest as the cutpoints are drawn completely randomly, followed by the classical random forest, while for conditional inference forests the runtime is the largest (Probst et al., 2019).
If you need to reduce training time significantly, try extremely randomized trees; however, be sure to assess predictive accuracy against traditional split rules as this approach often has a negative impact on your loss function.
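ranger exposes this extremely randomized variant through its splitrule argument; the sketch below draws a single random cut point per candidate variable so you can compare the resulting OOB error against the default variance-based rule (the comparison is illustrative, not a recommendation).
# extremely randomized trees: one random cut point per candidate variable
ames_ert <- ranger(
  Sale_Price ~ .,
  data = ames_train,
  num.trees = n_features * 10,
  mtry = floor(n_features / 3),
  splitrule = "extratrees",
  num.random.splits = 1,
  seed = 123
)
sqrt(ames_ert$prediction.error) # compare to the variance splitting rule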
As we introduce more complex algorithms with greater number of hyperparameters, we should become more strategic with our tuning strategies. One way to become more strategic is to consider how we proceed through our grid search. Up to this point, all our grid searches have been full Cartesian grid searches where we assess every combination of hyperparameters of interest. We could continue to do the same; for example, the next code block searches across 120 combinations of hyperparameter settings.
This grid search takes approximately 2 minutes.
# create hyperparameter grid
hyper_grid <- expand.grid(
mtry = floor(n_features * c(.05, .15, .25, .333, .4)),
min.node.size = c(1, 3, 5, 10),
replace = c(TRUE, FALSE),
sample.fraction = c(.5, .63, .8),
rmse = NA
)
# execute full cartesian grid search
for(i in seq_len(nrow(hyper_grid))) {
# fit model for ith hyperparameter combination
fit <- ranger(
formula = Sale_Price ~ .,
data = ames_train,
num.trees = n_features * 10,
mtry = hyper_grid$mtry[i],
min.node.size = hyper_grid$min.node.size[i],
replace = hyper_grid$replace[i],
sample.fraction = hyper_grid$sample.fraction[i],
verbose = FALSE,
seed = 123,
respect.unordered.factors = "order"
)
# export OOB error
hyper_grid$rmse[i] <- sqrt(fit$prediction.error)
}
# assess top 10 models
hyper_grid %>%
arrange(rmse) %>%
mutate(perc_gain = (default_rmse - rmse) / default_rmse * 100) %>%
head(10)
## mtry min.node.size replace sample.fraction rmse
## 1 32 1 FALSE 0.8 23975
## 2 32 3 FALSE 0.8 24023
## 3 32 5 FALSE 0.8 24033
## 4 26 3 FALSE 0.8 24104
## 5 20 1 FALSE 0.8 24132
## 6 26 5 FALSE 0.8 24144
## 7 20 3 FALSE 0.8 24195
## 8 26 1 FALSE 0.8 24216
## 9 32 10 FALSE 0.8 24224
## 10 20 5 FALSE 0.8 24249
## perc_gain
## 1 3.56
## 2 3.36
## 3 3.33
## 4 3.04
## 5 2.92
## 6 2.88
## 7 2.67
## 8 2.59
## 9 2.55
## 10 2.45
If we look at the results we see that the top 10 models are all near or below an RMSE of 24000 (a 2.5%–3.5% improvement over our baseline model). In these results, the default mtry value of floor(80 / 3) = 26 is nearly sufficient and smaller node sizes (deeper trees) perform best. What stands out the most is that taking less than a 100% sample rate and sampling without replacement consistently performs best. Sampling less than 100% adds additional randomness to the procedure, which helps to further de-correlate the trees. Sampling without replacement likely improves performance because this data has a lot of high-cardinality categorical features that are imbalanced.
However, as we add more hyperparameters and values to search across and as our data sets become larger, you can see how a full Cartesian search can become exhaustive and computationally expensive. In addition to full Cartesian search, the h2o package provides a random grid search that allows you to jump from one random combination to another and it also provides early stopping rules that allow you to stop the grid search once a certain condition is met (e.g., a certain number of models have been trained, a certain runtime has elapsed, or the accuracy has stopped improving by a certain amount). Although using a random discrete search path will likely not find the optimal model, it typically does a good job of finding a very good model.
To fit a random forest model with h2o, we first need to initiate our h2o session.
h2o.no_progress()
h2o.init(max_mem_size = "5g")
Next, we need to convert our training and test data sets to objects that h2o can work with.
# convert training data to h2o object
train_h2o <- as.h2o(ames_train)
# set the response column to Sale_Price
response <- "Sale_Price"
# set the predictor names
predictors <- setdiff(colnames(ames_train), response)
The following fits a default random forest model with h2o to illustrate that our baseline results (OOB RMSE = 24655) are very similar to the baseline ranger model we fit earlier.
h2o_rf1 <- h2o.randomForest(
x = predictors,
y = response,
training_frame = train_h2o,
ntrees = n_features * 10,
seed = 123
)
h2o_rf1
## Model Details:
## ==============
##
## H2ORegressionModel: drf
## Model ID: DRF_model_R_1561581756490_1
## Model Summary:
## number_of_trees number_of_internal_trees model_size_in_bytes …
## 1 800 800 12401089 …
##
##
## H2ORegressionMetrics: drf
## ** Reported on training data. **
## ** Metrics reported on Out-Of-Bag training samples **
##
## MSE: 607854463
## RMSE: 24654.7
## MAE: 15078.35
## RMSLE: 0.1371323
## Mean Residual Deviance : 607854463
To execute a grid search in h2o we need our hyperparameter grid to be a list. For example, the following code searches a larger grid space than before with a total of 240 hyperparameter combinations. We then create a random grid search strategy that will stop if none of the last 10 models have managed to have a 0.1% improvement in MSE compared to the best model before that. If we continue to find improvements then we cut the grid search off after 300 seconds (5 minutes).
# hyperparameter grid
hyper_grid <- list(
mtries = floor(n_features * c(.05, .15, .25, .333, .4)),
min_rows = c(1, 3, 5, 10),
max_depth = c(10, 20, 30),
sample_rate = c(.55, .632, .70, .80)
)
# random grid search strategy
search_criteria <- list(
strategy = "RandomDiscrete",
stopping_metric = "mse",
stopping_tolerance = 0.001, # stop if improvement is < 0.1%
stopping_rounds = 10, # over the last 10 models
max_runtime_secs = 60*5 # or stop search after 5 min.
)
We can then perform the grid search with h2o.grid(). The following executes the grid search with early stopping turned on. The early stopping we specify below in h2o.grid() will stop growing an individual random forest model if we have not experienced at least a 0.5% improvement in the overall OOB error in the last 10 trees. This is very useful as we can specify to build 1000 trees for each random forest model but h2o may only build 200 trees if we don’t experience any improvement.
This grid search takes 5 minutes.
# perform grid search
random_grid <- h2o.grid(
algorithm = "randomForest",
grid_id = "rf_random_grid",
x = predictors,
y = response,
training_frame = train_h2o,
hyper_params = hyper_grid,
ntrees = n_features * 10,
seed = 123,
stopping_metric = "RMSE",
stopping_rounds = 10, # stop if last 10 trees added
stopping_tolerance = 0.005, # don’t improve RMSE by 0.5%
search_criteria = search_criteria
)
Our grid search assessed 76 models before stopping due to time. The best model (max_depth = 20, min_rows = 1, mtries = 20, and sample_rate = 0.8) achieved an OOB RMSE of 24091. So although our random search assessed only about 30% of the number of models that a full grid search would have, the more efficient random search found a near-optimal model within the specified time constraint.
In fact, we re-ran the same grid search but allowed for a full search across all 240 hyperparameter combinations and the best model achieved an OOB RMSE of 23785.
# collect the results and sort by our model performance metric
# of choice
random_grid_perf <- h2o.getGrid(
grid_id = "rf_random_grid",
sort_by = "mse",
decreasing = FALSE
)
random_grid_perf
## H2O Grid Details
## ================
##
## Grid ID: rf_random_grid
## Used hyper parameters:
## - max_depth
## - min_rows
## - mtries
## - sample_rate
## Number of models: 76
## Number of failed models: 0
##
## Hyper-Parameter Search Summary: ordered by increasing mse
## max_depth min_rows mtries sample_rate … mse
## 1 20 1.0 20 0.8 … 5.803817602118055E8
## 2 20 1.0 32 0.8 … 5.979963774377276E8
## 3 20 1.0 26 0.632 … 6.078544626686643E8
## 4 30 1.0 32 0.632 … 6.175727047678405E8
## 5 30 1.0 26 0.55 … 6.206862379537665E8
##
## ---
## max_depth min_rows mtries sample_rate … mse
## 71 10 1.0 4 0.55 … 9.322703689301581E8
## 72 20 5.0 4 0.7 … 9.331406598281059E8
## 73 20 5.0 4 0.55 … 1.0138375847024254E9
## 74 20 10.0 4 0.632 … 1.0978382048184035E9
## 75 30 10.0 4 0.55 … 1.1014816203603299E9
## 76 20 5.0 32 0.8 … 1.5919839519623117E9
Computing feature importance and feature effects for random forests follows the same procedure as discussed in Section 10.5. However, in addition to the impurity-based measure of feature importance, where we base feature importance on the average total reduction of the loss function for a given feature across all trees, random forests also typically include a permutation-based importance measure. In the permutation-based approach, for each tree, the OOB sample is passed down the tree and the prediction accuracy is recorded. Then the values of each variable (one at a time) are randomly permuted and the accuracy is again computed. The decrease in accuracy as a result of this random shuffling of feature values is averaged over all the trees for each predictor. The variables with the largest average decrease in accuracy are considered most important.
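To make the mechanism concrete, here is a deliberately simplified sketch of the permutation idea. It scores a single refit model on the training data rather than passing each tree's own OOB sample down the tree, so the absolute numbers are optimistic; the helper function and the three features shown are purely illustrative.
# simplified permutation-importance sketch (not ranger's internal procedure)
fit <- ranger(Sale_Price ~ ., data = ames_train, seed = 123)
baseline_preds <- predict(fit, data = ames_train)$predictions
baseline_rmse <- sqrt(mean((ames_train$Sale_Price - baseline_preds)^2))
permute_one <- function(feature) {
  shuffled <- ames_train
  shuffled[[feature]] <- sample(shuffled[[feature]]) # break the feature's signal
  preds <- predict(fit, data = shuffled)$predictions
  sqrt(mean((ames_train$Sale_Price - preds)^2)) - baseline_rmse
}
# larger increases in RMSE suggest more important features
sort(vapply(c("Gr_Liv_Area", "Overall_Qual", "Year_Built"), permute_one,
            numeric(1)), decreasing = TRUE)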
For example, we can compute both measures of feature importance with ranger by setting the importance
argument.
For ranger, once you’ve identified the optimal parameter values from the grid search, you will want to re-run your model with these hyperparameter values. You can also crank up the number of trees, which will help create more stable estimates of variable importance.
# re-run model with impurity-based variable importance
rf_impurity <- ranger(
formula = Sale_Price ~ .,
data = ames_train,
num.trees = 2000,
mtry = 32,
min.node.size = 1,
sample.fraction = .80,
replace = FALSE,
importance = "impurity",
respect.unordered.factors = "order",
verbose = FALSE,
seed = 123
)
# re-run model with permutation-based variable importance
rf_permutation <- ranger(
formula = Sale_Price ~ .,
data = ames_train,
num.trees = 2000,
mtry = 32,
min.node.size = 1,
sample.fraction = .80,
replace = FALSE,
importance = "permutation",
respect.unordered.factors = "order",
verbose = FALSE,
seed = 123
)
The resulting VIPs are displayed in Figure 11.5. Typically, you will not see the same variable importance order between the two options; however, you will often see similar variables at the top of the plots (and also the bottom). Consequently, in this example, we can comfortably state that there appears to be enough evidence to suggest that three variables stand out as most influential:
• Overall_Qual
• Gr_Liv_Area
• Neighborhood
Looking at the next ~10 variables in both plots, you will also see some commonality in influential variables (e.g., Garage_Cars, Exter_Qual, Bsmt_Qual, and Year_Built).
p1 <- vip::vip(rf_impurity, num_features = 25, bar = FALSE)
p2 <- vip::vip(rf_permutation, num_features = 25, bar = FALSE)
gridExtra::grid.arrange(p1, p2, nrow = 1)
Random forests provide a very powerful out-of-the-box algorithm that often has great predictive accuracy. They come with all the benefits of decision trees (with the exception of surrogate splits) and bagging but greatly reduce instability and between-tree correlation. And due to the added split variable selection attribute, random forests are also faster than bagging as they have a smaller feature search space at each tree split. However, random forests will still suffer from slow computational speed as your data sets get larger but, similar to bagging, the algorithm is built upon independent steps, and most modern implementations (e.g., ranger, h2o) allow for parallelization to improve training time.
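As a final note on run time, the following minimal sketch shows what parallelized training looks like with ranger (the thread count is arbitrary; ranger typically uses the available cores when num.threads is left unset).
# grow trees in parallel across CPU cores
rf_parallel <- ranger(
  Sale_Price ~ .,
  data = ames_train,
  num.trees = 2000,
  num.threads = 4, # adjust to the number of cores you want to dedicate
  seed = 123
)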
1See Friedman et al. (2001) for a mathematical explanation of the tree correlation phenomenon.
2Here we use the ranger package to fit a baseline random forest. It is common for folks to first learn to implement random forests by using the original randomForest package (Liaw and Wiener, 2002). Although randomForest is a great package with many bells and whistles, ranger provides a much faster C++ implementation of the same algorithm.
3Conditional inference trees are available in the partykit (Hothorn and Zeileis, 2015) and ranger packages among others.