
Bundle Methods for Regularized Risk Minimization

Teo, Choon-Hui; Vishwanathan, S.V.N.; Smola, Alexander; Le, Quoc V.

Description

A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Gaussian Processes, Logistic Regression, Conditional Random Fields (CRFs), and Lasso, amongst others. This paper describes the theory and implementation of a scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data-locality, and can deal with regularizers such as L1 and L2 penalties. In addition to the unified framework, we present tight convergence bounds, which show that our algorithm converges in O(1/ε) steps to ε precision for general convex problems and in O(log(1/ε)) steps for continuously differentiable problems. We demonstrate the performance of our general-purpose solver on a variety of publicly available data sets.
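The cutting-plane idea behind the solver described above can be sketched on a toy problem. The sketch below is illustrative only: it uses a made-up 1D linear-SVM dataset, and it solves the regularized master problem by a dense grid search rather than the quadratic-programming subproblem solver the paper actually uses. Each iteration linearizes the empirical risk at the current iterate and minimizes the regularizer plus the pointwise maximum of all accumulated linearizations.

```python
import numpy as np

# Toy 1D data for a linear SVM: minimize lam/2 * w^2 + mean hinge loss.
# (Illustrative values, not from the paper.)
X = np.array([2.0, 1.5, -1.0, -2.5])   # inputs
y = np.array([1.0, 1.0, -1.0, -1.0])   # labels
lam = 0.1                               # regularization constant

def risk_and_subgrad(w):
    """Empirical hinge risk and one subgradient at w."""
    margins = 1.0 - y * (X * w)
    active = margins > 0
    risk = np.mean(np.maximum(margins, 0.0))
    grad = -np.sum((y * X)[active]) / len(X) if active.any() else 0.0
    return risk, grad

# Bundle / cutting-plane loop: each iteration adds the linearization
#   R(w') >= a_t * w' + b_t   taken at the current iterate w_t,
# then minimizes  lam/2 * w'^2 + max_t (a_t * w' + b_t).
# In 1D the piecewise-quadratic subproblem is solved here by grid
# search, standing in for a proper QP solver.
grid = np.linspace(-10.0, 10.0, 20001)
planes = []
w = 0.0
for _ in range(30):
    r, a = risk_and_subgrad(w)
    planes.append((a, r - a * w))                       # (slope, offset)
    lower = np.max([a_ * grid + b_ for a_, b_ in planes], axis=0)
    w = grid[np.argmin(0.5 * lam * grid**2 + lower)]
```

On this toy instance the iterate settles near w = 1, the minimizer of the regularized hinge objective; since the hinge risk is piecewise linear, a handful of cutting planes reconstructs it exactly, which is the intuition behind the fast convergence rates quoted in the abstract.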

dc.contributor.author: Teo, Choon-Hui
dc.contributor.author: Vishwanathan, S.V.N.
dc.contributor.author: Smola, Alexander
dc.contributor.author: Le, Quoc V.
dc.date.accessioned: 2015-12-07T22:21:47Z
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/1885/20201
dc.description.abstract: A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Gaussian Processes, Logistic Regression, Conditional Random Fields (CRFs), and Lasso, amongst others. This paper describes the theory and implementation of a scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data-locality, and can deal with regularizers such as L1 and L2 penalties. In addition to the unified framework, we present tight convergence bounds, which show that our algorithm converges in O(1/ε) steps to ε precision for general convex problems and in O(log(1/ε)) steps for continuously differentiable problems. We demonstrate the performance of our general-purpose solver on a variety of publicly available data sets.
dc.publisher: MIT Press
dc.source: Journal of Machine Learning Research
dc.source.uri: http://jmlr.csail.mit.edu/papers/v11/
dc.subject: Keywords: Bundle methods; Cutting plane methods; Subgradient methods; Parallel optimization; Risk minimization; Regularized risk minimization; Optimization; Convergence of numerical methods; Learning algorithms; Support vector machines
dc.title: Bundle Methods for Regularized Risk Minimization
dc.type: Journal article
local.description.notes: Imported from ARIES
local.identifier.citationvolume: 11
dc.date.issued: 2010
local.identifier.absfor: 080109 - Pattern Recognition and Data Mining
local.identifier.ariespublication: u4607519xPUB11
local.type.status: Published Version
local.contributor.affiliation: Teo, Choon-Hui, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Vishwanathan, S.V.N., Purdue University
local.contributor.affiliation: Smola, Alexander, Yahoo! Research
local.contributor.affiliation: Le, Quoc V., Stanford University
local.description.embargo: 2037-12-31
local.bibliographicCitation.startpage: 311
local.bibliographicCitation.lastpage: 365
local.identifier.absseo: 890205 - Information Processing Services (incl. Data Entry and Capture)
dc.date.updated: 2016-02-24T11:13:48Z
local.identifier.scopusID: 2-s2.0-76749161402
Collections: ANU Research Publications

Download

File: 01_Teo_Bundle_Methods_for_Regularized_2010.pdf
Size: 1.79 MB
Format: Adobe PDF (Request a copy)


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.

Updated: 17 November 2022 / Responsible Officer: University Librarian / Page Contact: Library Systems & Web Coordinator