Learning without Concentration

dc.contributor.author: Mendelson, Shahar
dc.date.accessioned: 2015-12-08T22:11:13Z
dc.date.available: 2015-12-08T22:11:13Z
dc.date.issued: 2014
dc.date.updated: 2016-06-14T09:13:25Z
dc.description.abstract: We obtain sharp bounds on the convergence rate of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a ‘small-ball’ assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the ‘noise level’ of the problem. When applied to the classical, bounded scenario, the method always improves the known estimates.
dc.identifier.issn: 1938-7228
dc.identifier.uri: http://hdl.handle.net/1885/29703
dc.publisher: Journal of Machine Learning Research (Online)
dc.source: JMLR: Workshop and Conference Proceedings
dc.title: Learning without Concentration
dc.type: Journal article
local.bibliographicCitation.lastpage: 15
local.bibliographicCitation.startpage: 1
local.contributor.affiliation: Mendelson, Shahar, College of Physical and Mathematical Sciences, ANU
local.contributor.authoruid: Mendelson, Shahar, u4011413
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.absfor: 010404 - Probability Theory
local.identifier.absseo: 970101 - Expanding Knowledge in the Mathematical Sciences
local.identifier.ariespublication: u5328909xPUB67
local.identifier.citationvolume: 35
local.identifier.scopusID: 2-s2.0-84939623358
local.type.status: Published Version