Sharper lower bounds on the performance of the empirical risk minimization algorithm
Date
2010
Authors
Lecué, Guillaume
Mendelson, Shahar
Publisher
Chapman & Hall
Abstract
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function T and the learning class F, the excess risk of the empirical risk minimization algorithm is lower bounded by E sup_{q∈Q} G_q / (δ√n), where (G_q)_{q∈Q} is a canonical Gaussian process associated with Q (a well chosen subset of F) and δ is a parameter governing the oscillations of the empirical excess risk function over a small ball in F.
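To make the quantity appearing in the bound concrete, the following minimal sketch estimates E sup_{q∈Q} G_q by Monte Carlo for a finite set Q of vectors in R^d, taking the canonical Gaussian process to be G_q = ⟨g, q⟩ with g a standard Gaussian vector (so that the covariance is the inner product on Q). The function names, the example set Q and the values of δ and n are illustrative assumptions only; they are not the paper's construction of Q or of δ.

# Illustrative sketch only: Q is assumed to be a finite set of vectors in R^d,
# and the canonical Gaussian process is taken to be G_q = <g, q>, g ~ N(0, I_d).
import numpy as np

def expected_sup_gaussian(Q, n_mc=10_000, seed=None):
    """Monte Carlo estimate of E sup_{q in Q} G_q for G_q = <g, q>."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, dtype=float)            # shape (|Q|, d)
    g = rng.standard_normal((n_mc, Q.shape[1]))
    return (g @ Q.T).max(axis=1).mean()       # average over draws of sup over Q

def erm_lower_bound(Q, delta, n):
    """Order of the lower bound E sup_{q in Q} G_q / (delta * sqrt(n))."""
    return expected_sup_gaussian(Q) / (delta * np.sqrt(n))

# Example (hypothetical values): Q = standard basis of R^5, delta = 0.5, n = 1000.
if __name__ == "__main__":
    Q = np.eye(5)
    print(erm_lower_bound(Q, delta=0.5, n=1000))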
Keywords
Empirical risk minimization; Learning theory; Lower bound; Multidimensional central limit theorem; Uniform central limit theorem
Source
Bernoulli
Type
Journal article
Restricted until
2037-12-31