Random Forest May Not Need An Explicit Validation Set For Evaluation

A guide to out-of-bag validation.

We all know that ML models should not be evaluated on the data they were trained on. Thus, we should always keep a held-out validation/test set.

But random forests are an exception to that.

In other words, you can reliably evaluate a random forest using the training set itself.

Confused?

Let me explain.

To recap, a random forest is trained as follows:

  • First, create different subsets of the training data by sampling with replacement (bootstrapping).

  • Next, train one decision tree per subset.

  • Finally, aggregate the predictions of all trees (majority vote for classification, average for regression) to get the final prediction.
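To make this concrete, here is a minimal sketch of that training loop, assuming X and y are NumPy arrays. Note that this shows plain bagging; an actual random forest additionally subsamples features at each split.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_bagged_trees(X, y, n_trees=100, seed=0):
    """Train one decision tree per bootstrap sample of (X, y)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    trees, bootstrap_idx = [], []
    for _ in range(n_trees):
        # Draw n row indices WITH replacement -- the bootstrap sample.
        idx = rng.integers(0, n, size=n)
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
        bootstrap_idx.append(idx)
    return trees, bootstrap_idx

def predict(trees, X):
    # Aggregate by averaging the individual tree predictions (regression).
    return np.mean([t.predict(X) for t in trees], axis=0)
```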

Since each subset is sampled with replacement, EVERY decision tree leaves some points in the full training set unseen (roughly 37% of them, on average).

Thus, we can use them to validate that specific decision tree.

This is called out-of-bag (OOB) validation.

Calculating the out-of-bag score for the whole random forest is simple too.

For every data point in the entire training set:

  • Gather predictions from all decision trees for which it was an out-of-bag sample (i.e., trees that never saw it during training)

  • Aggregate predictions to get the final prediction

Finally, score all the predictions to get the out-of-bag score.
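Here's a sketch of that computation, reusing the trees and bootstrap_idx lists returned by the hypothetical train_bagged_trees above:

```python
import numpy as np
from sklearn.metrics import r2_score

def oob_score(trees, bootstrap_idx, X, y):
    """Out-of-bag R^2 score for the bagged regression ensemble above."""
    n = len(X)
    pred_sum = np.zeros(n)
    pred_cnt = np.zeros(n)
    for tree, idx in zip(trees, bootstrap_idx):
        # A point is out-of-bag for this tree if it never
        # appeared in the tree's bootstrap sample.
        oob = np.ones(n, dtype=bool)
        oob[idx] = False
        if oob.any():
            pred_sum[oob] += tree.predict(X[oob])
            pred_cnt[oob] += 1
    # Average each point's OOB predictions, skipping points that
    # were in-bag for every tree (rare once you have many trees).
    seen = pred_cnt > 0
    return r2_score(y[seen], pred_sum[seen] / pred_cnt[seen])
```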

Out-of-bag validation has several benefits:

  • If you have limited data, you don't need to split off a separate validation set

  • It's computationally cheaper than, say, k-fold cross-validation, since no extra models have to be trained

  • It avoids data leakage, since every point is scored only by trees that never trained on it

Luckily, out-of-bag validation is neatly built into sklearn's random forest implementation too.
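For instance, passing oob_score=True makes sklearn compute the score during training (the dataset below is just a synthetic stand-in):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=42)  # stand-in data

# oob_score=True tells sklearn to compute the OOB score during fit().
model = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=42)
model.fit(X, y)

print(model.oob_score_)  # mean accuracy on the out-of-bag samples
```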

👉 Over to you:

  1. What are some limitations of out-of-bag validation?

  2. How reliable is the out-of-bag score to tune the hyperparameters of the random forest model?
