
A stochastic quasi-Newton method for online convex optimization

Schraudolph, Nicol; Yu, Jin; Guenter, Simon

Description

We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.
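For readers unfamiliar with this class of methods, the sketch below illustrates a generic stochastic limited-memory BFGS step: the standard two-loop recursion applied with minibatch gradient estimates. It is not the authors' exact oBFGS/oLBFGS algorithm (which includes further modifications for gradient noise, such as consistent gradient differences and step-size control); the function names, fixed step size, and positive-curvature guard are illustrative assumptions.

```python
import numpy as np
from collections import deque

def two_loop_direction(grad, s_list, y_list):
    # Standard L-BFGS two-loop recursion: approximates -H * grad from the
    # m most recent curvature pairs (s, y).
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):  # newest to oldest
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # usual gamma = s'y / y'y scaling for the initial Hessian approximation
        s, y = s_list[-1], y_list[-1]
        q = q * ((s @ y) / (y @ y))
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):  # oldest to newest
        b = rho * (y @ q)
        q = q + (a - b) * s
    return -q

def stochastic_lbfgs(grad_fn, w0, n_steps=1000, m=10, eta=0.1):
    # grad_fn(w) must return a stochastic (e.g. minibatch) gradient estimate at w.
    w = np.array(w0, dtype=float)
    S, Y = deque(maxlen=m), deque(maxlen=m)  # limited-memory curvature history
    g = grad_fn(w)
    for _ in range(n_steps):
        d = two_loop_direction(g, list(S), list(Y))
        w_new = w + eta * d
        g_new = grad_fn(w_new)
        s, y = w_new - w, g_new - g
        if s @ y > 1e-10:  # keep only pairs with positive curvature (noise can break this)
            S.append(s)
            Y.append(y)
        w, g = w_new, g_new
    return w
```

In practice grad_fn would draw a fresh minibatch at each call and the step size would typically be annealed; the published method adds further safeguards against noisy curvature estimates that are omitted here for brevity.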

dc.contributor.author: Schraudolph, Nicol
dc.contributor.author: Yu, Jin
dc.contributor.author: Guenter, Simon
dc.coverage.spatial: San Juan, Puerto Rico
dc.date.accessioned: 2015-12-10T21:54:15Z
dc.date.created: March 21-24, 2007
dc.identifier.isbn: 0972735828
dc.identifier.uri: http://hdl.handle.net/1885/38855
dc.description.abstract: We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.
dc.publisher: OmniPress
dc.relation.ispartofseries: International Conference on Artificial Intelligence and Statistics (AISTATS 2007)
dc.source: Proceedings of The 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007)
dc.source.uri: http://www.stat.umn.edu/~aistat/proceedings/start.htm
dc.subject: Keywords: Conditional random field; Convex functions; High-dimensional problems; Natural gradient; Natural language processing; Nonconvex optimization problem; Online optimization; Quasi-Newton methods; Quasi-Newton optimization method; Stochastic gradient methods
dc.title: A stochastic quasi-Newton method for online convex optimization
dc.type: Conference paper
local.description.notes: Imported from ARIES
local.description.refereed: Yes
dc.date.issued: 2007
local.identifier.absfor: 080199 - Artificial Intelligence and Image Processing not elsewhere classified
local.identifier.absfor: 080201 - Analysis of Algorithms and Complexity
local.identifier.ariespublication: u8803936xPUB167
local.type.status: Published Version
local.contributor.affiliation: Schraudolph, Nicol, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Yu, Jin, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Guenter, Simon, College of Engineering and Computer Science, ANU
local.description.embargo: 2037-12-31
local.bibliographicCitation.startpage: 436
local.bibliographicCitation.lastpage: 443
dc.date.updated: 2016-02-24T11:43:31Z
local.identifier.scopusID: 2-s2.0-84862300219

Collections: ANU Research Publications

Download

File: 01_Schraudolph_A_stochastic_quasi-Newton_2007.pdf
Size: 567.67 kB
Format: Adobe PDF


