Gradient Boosted Decision Trees on Hadoop
published: Jan. 13, 2011, recorded: December 2010, views: 24111
Description
Stochastic Gradient Boosted Decision Trees (GBDT) is one of the most widely used learning algorithms in machine learning today. It is adaptable, easy to interpret, and produces highly accurate models. However, most implementations today are computationally expensive and require all training data to fit in main memory. As training data grows ever larger, there is strong motivation to parallelize the GBDT algorithm. Parallelizing decision tree training is intuitive, and various approaches have been explored in the existing literature. Stochastic boosting, on the other hand, is inherently a sequential process and has not previously been applied to distributed decision trees. In this paper, we describe a distributed implementation of GBDT that utilizes MPI on the Hadoop grid environment, as presented by us at CIKM in 2009.
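To make the sequential nature of boosting concrete, here is a minimal single-machine sketch of stochastic gradient boosting with one-split regression trees (stumps) under squared-error loss. This is an illustration of the general algorithm, not the authors' distributed implementation; all function names and default parameters are the sketch's own. The comments note where the distributed version would differ: it is the per-tree split search that gets parallelized (workers compute split statistics on their data partitions and merge them, e.g. via an MPI allreduce), while the outer boosting loop remains sequential.

```python
import random

def fit_stump(X, residuals):
    """Fit a one-split regression tree (stump) to the residuals by scanning
    every feature threshold for the lowest squared error.  In a distributed
    implementation, each worker would compute these split statistics over its
    own data partition and the statistics would be merged across workers."""
    best = None
    for j in range(len(X[0])):
        values = sorted(set(row[j] for row in X))
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2.0
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    if best is None:  # all rows identical: fall back to a constant leaf
        m = sum(residuals) / len(residuals)
        return (0, float("inf"), m, m)
    return best[1:]  # (feature, threshold, left_value, right_value)

def predict_stump(stump, row):
    j, t, left_value, right_value = stump
    return left_value if row[j] <= t else right_value

def predict(base, ensemble, learning_rate, row):
    return base + sum(learning_rate * predict_stump(s, row) for s in ensemble)

def fit_gbdt(X, y, n_trees=100, learning_rate=0.1, subsample=0.5, seed=0):
    """Stochastic gradient boosting for squared-error loss.  Boosting is
    inherently sequential: each stump fits the residuals of the ensemble
    built so far, so the trees cannot be trained independently."""
    rng = random.Random(seed)
    base = sum(y) / len(y)  # initial prediction: the mean target
    ensemble = []
    for _ in range(n_trees):
        # The "stochastic" part: each stump sees only a random subsample.
        idx = rng.sample(range(len(X)), max(2, int(subsample * len(X))))
        residuals = [y[i] - predict(base, ensemble, learning_rate, X[i])
                     for i in idx]
        ensemble.append(fit_stump([X[i] for i in idx], residuals))
    return base, ensemble, learning_rate
```

On a simple separable 1-D problem the ensemble drives residuals toward zero: fitting `X = [[i] for i in range(10)]` against `y = [0]*5 + [1]*5` yields predictions near 0 at the left end and near 1 at the right end. The residual recomputation inside the loop is the data-dependent step that forces sequential execution across trees, which is exactly why distributing GBDT is harder than distributing a single decision tree.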