Automating stochastic gradient methods with adaptive batch sizes

Presented by: Tom Goldstein (University of Maryland)
Date: Wednesday 6 September 2017, 09:50 to 10:40
Venue: INI Seminar Room 1
Abstract: 
This talk will address several issues related to training neural networks using stochastic gradient methods. First, we'll discuss the difficulties of training in a distributed environment and present a new method, centralVR, for boosting the scalability of training methods. Then, we'll turn to automating stochastic gradient descent and show that learning rate selection can be simplified using "Big Batch" strategies that adaptively choose minibatch sizes.
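
The sketch below illustrates the general flavor of an adaptive-batch-size ("Big Batch") SGD loop: the batch is grown whenever the per-example gradients disagree too much with their mean, so that the batch gradient stays a reliable descent direction. This is a minimal sketch, not the talk's actual algorithm; the least-squares loss, the doubling rule, and the variance threshold `theta` are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem: least-squares regression.
rng = np.random.default_rng(0)
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def per_example_grads(w, idx):
    """Gradients of 0.5*(x_i . w - y_i)^2 for each example in the batch."""
    residuals = X[idx] @ w - y[idx]      # shape (B,)
    return residuals[:, None] * X[idx]   # shape (B, d)

w = np.zeros(d)
batch_size, lr, theta = 32, 1e-3, 1.0    # theta is an assumed threshold

for step in range(200):
    idx = rng.choice(n, size=batch_size, replace=False)
    grads = per_example_grads(w, idx)
    g = grads.mean(axis=0)

    # Variance test: estimate the variance of the batch-mean gradient.
    # If it is large relative to ||g||^2, the batch gradient is noisy,
    # so double the batch size and resample before taking a step.
    var_of_mean = grads.var(axis=0).sum() / batch_size
    if var_of_mean > theta**2 * np.dot(g, g) and batch_size < n:
        batch_size = min(2 * batch_size, n)
        continue

    w -= lr * g

print("batch size at exit:", batch_size)
print("distance to w_true:", np.linalg.norm(w - w_true))
```

Because the batch only grows when the gradient estimate becomes unreliable, early iterations stay cheap while later iterations approach full-gradient accuracy; with a trustworthy gradient in hand, the step size can in principle also be set automatically (e.g. by a line search) rather than hand-tuned.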