MapReduce/Bigtable for Distributed Optimization
author: Slav Petrov, Research at Google, Google, Inc.
published: Jan. 13, 2011, recorded: December 2010, views: 6648
Slides
Description
For large datasets it can be very time consuming to run gradient-based optimization, for example to minimize the negative log-likelihood of maximum entropy models. Distributed methods are therefore appealing, and a number of distributed gradient optimization strategies have been proposed, including distributed gradient, asynchronous updates, and iterative parameter mixtures. In this paper, we evaluate these strategies with regard to their accuracy and speed over MapReduce/Bigtable and discuss the techniques needed for high performance.
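To make one of these strategies concrete, here is a minimal sketch of iterative parameter mixing in a MapReduce style: each shard runs local gradient steps from the current global weights (the map phase), and the per-shard weights are then averaged (the reduce phase). This is an illustrative sketch, not the lecture's implementation; the logistic-regression objective and all function and variable names are assumptions.

```python
# Illustrative sketch of iterative parameter mixing; names and the
# logistic-regression objective are assumptions, not from the lecture.
import numpy as np

def local_gradient_steps(w, X, y, epochs=1, lr=0.1):
    """Run plain gradient steps on one shard's data (the 'map' phase)."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def iterative_parameter_mixing(shards, dim, iterations=10):
    """Average the per-shard weights each round (the 'reduce' phase)."""
    w = np.zeros(dim)
    for _ in range(iterations):
        local = [local_gradient_steps(w, X, y) for X, y in shards]
        w = np.mean(local, axis=0)            # uniform parameter mixture
    return w

# Usage: two toy shards of a binary classification problem.
rng = np.random.default_rng(0)
shards = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
          for _ in range(2)]
w = iterative_parameter_mixing(shards, dim=3)
```

Because each round communicates only the weight vectors rather than per-example gradients, this scheme maps naturally onto a single MapReduce pass per mixing iteration.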
Reviews and comments:
Good job. Please speak in Hindi; I can't understand.
Please send me a video of the Google Bigtable talk at this ID.
Thanks for your workshop.