A stochastic quasi-Newton method for online convex optimization
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.
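Since this record carries only the abstract, the following is a minimal, illustrative Python sketch of what one stochastic LBFGS step might look like, not the authors' reference implementation. The function names (olbfgs_direction, olbfgs_step, grad_fn), the hyperparameter defaults (eta0, tau, lam, m), and the toy quadratic problem are all assumptions made here for illustration; the two-loop recursion is standard LBFGS machinery, and the online-specific details (differencing the gradient on the same minibatch, damping the curvature pair, a decaying gain schedule) are hedged approximations of the kind of modifications the abstract alludes to. Consult the paper for the exact update.

import numpy as np

def olbfgs_direction(grad, s_list, y_list, eps=1e-10):
    # Standard LBFGS two-loop recursion: approximates -H @ grad from
    # the stored (s, y) curvature pairs.
    q = grad.copy()
    rhos = [1.0 / (np.dot(s, y) + eps) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    if s_list:
        # Scale the initial Hessian guess by the average of s.y / y.y,
        # one common stabilizing choice in a stochastic setting.
        q *= np.mean([np.dot(s, y) / (np.dot(y, y) + eps)
                      for s, y in zip(s_list, y_list)])
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        q += (a - rho * np.dot(y, q)) * s
    return -q

def olbfgs_step(w, grad_fn, batch, s_list, y_list, t,
                eta0=0.5, tau=1e4, lam=0.1, m=10):
    # One stochastic LBFGS update. grad_fn(w, batch) returns the
    # minibatch gradient; all defaults here are illustrative guesses.
    g = grad_fn(w, batch)
    eta = eta0 * tau / (tau + t)               # decaying gain schedule
    s = eta * olbfgs_direction(g, s_list, y_list)
    w_new = w + s
    # Online twist: difference the gradient on the SAME minibatch, and
    # damp y by lam * s so s.y stays positive despite gradient noise.
    y = grad_fn(w_new, batch) - g + lam * s
    s_list.append(s); y_list.append(y)
    if len(s_list) > m:                        # memory-limited history
        s_list.pop(0); y_list.pop(0)
    return w_new

# Toy usage: track the minimum of a noisy, ill-conditioned quadratic.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, 20))        # varying curvature
grad_fn = lambda w, batch: A @ (w - batch.mean(axis=0))
w, s_hist, y_hist = rng.normal(size=20), [], []
for t in range(500):
    w = olbfgs_step(w, grad_fn, rng.normal(scale=0.1, size=(8, 20)),
                    s_hist, y_hist, t)
print(np.linalg.norm(w))   # small: the iterate hovers near the optimum

Storing only the m most recent (s, y) pairs keeps both memory and per-step cost linear in the dimension, which is what makes the memory-limited form scalable to the very high-dimensional problems the abstract mentions.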
Collections: ANU Research Publications
Source: Proceedings of The 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007)
File: 01_Schraudolph_A_stochastic_quasi-Newton_2007.pdf (567.67 kB, Adobe PDF; request a copy)