Finito: A faster, permutable incremental gradient method for big data problems
Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state of the...
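To illustrate the "sampling without replacement" idea the abstract refers to, here is a minimal sketch of a plain incremental-gradient loop on a toy least-squares finite sum, where each epoch visits the terms in a fresh random permutation rather than sampling indices with replacement. This is only an illustrative sketch of the sampling scheme, not the Finito update itself (which maintains per-term tables); the problem instance and step size are assumptions for the example.

```python
import numpy as np

# Toy finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2,
# constructed so that an exact minimizer x_true exists.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true


def grad_i(x, i):
    # Gradient of the i-th term f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])


x = np.zeros(d)
step = 0.05  # assumed small constant step size for this toy problem
for epoch in range(200):
    # Without-replacement scheme: permute the term indices each epoch,
    # so every f_i is visited exactly once per pass over the data.
    for i in rng.permutation(n):
        x -= step * grad_i(x, i)

print(np.linalg.norm(x - x_true))  # residual shrinks toward zero
```

In practice the permuted ordering tends to make faster progress per epoch than independently sampling an index at each step, which is the behaviour the paper exploits.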
Collections: ANU Research Publications
Source: 31st International Conference on Machine Learning, ICML 2014
File: 01_Defazio_Finito:_A_faster,_permutable_2014.pdf (2.09 MB, Adobe PDF)
Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.