A stochastic quasi-Newton method for online convex optimization
Date
2007
Authors
Schraudolph, Nicol
Yu, Jin
Guenter, Simon
Publisher
OmniPress
Abstract
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and limited-memory (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and on extending it to nonconvex optimization problems.
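To make the abstract concrete, the following is a minimal sketch of a stochastic (online) LBFGS iteration on a synthetic convex least-squares problem. It combines the standard LBFGS two-loop recursion with minibatch gradients, and illustrates the key online device the paper describes: forming each curvature pair from two gradients measured on the *same* minibatch, with a small damping term added to keep the pairs well-conditioned. All concrete values here (memory size, step size, damping constant, batch size) are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic convex problem: consistent least squares with a known minimizer.
d, n = 10, 200
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true

def grad(w, idx):
    """Minibatch gradient of the mean squared residual over rows idx."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

def two_loop(g, mem):
    """Standard LBFGS two-loop recursion: approximate H^{-1} @ g."""
    q = g.copy()
    alphas = []
    for s, y in reversed(mem):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if mem:                             # scale by most recent curvature
        s, y = mem[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(mem, reversed(alphas)):  # oldest pair first
        rho = 1.0 / (y @ s)
        q += (a - rho * (y @ q)) * s
    return q

m, eta, lam = 5, 0.1, 1e-4   # memory, step size, damping (assumed values)
w = np.zeros(d)
mem = []                     # list of (s, y) curvature pairs
for t in range(800):
    idx = rng.choice(n, size=50, replace=False)
    g = grad(w, idx)
    s = -eta * two_loop(g, mem)
    w_new = w + s
    # Online trick per the paper: evaluate both gradients on the SAME
    # minibatch, so y reflects curvature rather than sampling noise.
    # lam * s is a damping term keeping y @ s > 0 on this convex problem.
    y = grad(w_new, idx) - g + lam * s
    mem.append((s, y))
    if len(mem) > m:
        mem.pop(0)           # drop the oldest pair (limited memory)
    w = w_new
```

Because the least-squares system here is consistent, every minibatch gradient vanishes at `w_true`, so the iterates settle close to the true minimizer; on genuinely noisy objectives a decaying step-size schedule would be needed instead of the constant `eta` used in this sketch.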
Keywords
Conditional random field; Convex functions; High-dimensional problems; Natural gradient; Natural language processing; Nonconvex optimization problem; Online optimization; Quasi-Newton methods; Quasi-Newton optimization method; Stochastic gradient methods
Source
Proceedings of The 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007)
Type
Conference paper
Restricted until
2037-12-31