
Sharper lower bounds on the performance of the empirical risk minimization algorithm

Lecué, G; Mendelson, Shahar


We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometrical assumptions relating the target function T and the learning class F, the excess risk of the empirical risk minimization algorithm is lower bounded by E sup_{q∈Q} G_q · δ/√n, where (G_q)_{q∈Q} is a canonical Gaussian process associated with Q (a well-chosen subset of F) and δ is a parameter governing the oscillations of the empirical excess risk function over a small ball in F.
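The abstract does not define the canonical Gaussian process or name the underlying design variable; under the standard convention (an assumption of this note, with X denoting the underlying random variable and c an illustrative absolute constant), the covariance of (G_q)_{q∈Q} is the L_2 inner product of the indexing functions, so the statement can be read as a Gaussian-complexity lower bound. A minimal LaTeX sketch under that assumption:

% Canonical Gaussian process indexed by Q (standard convention, assumed here):
% (G_q)_{q in Q} is centered Gaussian with covariance given by the L_2 inner product,
\[
  \mathbb{E}\,[G_q G_{q'}] \;=\; \mathbb{E}\,[\,q(X)\,q'(X)\,], \qquad q, q' \in Q,
\]
% so that E sup_{q in Q} G_q is the Gaussian complexity of Q, and the lower bound
% on the excess risk of the empirical risk minimizer \hat f stated in the abstract reads
\[
  \mathcal{E}\bigl(\hat f\,\bigr) \;\ge\; c\,\frac{\delta}{\sqrt{n}}\;\mathbb{E}\sup_{q\in Q} G_q ,
\]
% where n is the sample size; the constant c and the notation \mathcal{E}, \hat f, X are
% illustrative conventions, not taken from the paper.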

Collections: ANU Research Publications
Date published: 2010
Type: Journal article
Source: Bernoulli
DOI: 10.3150/09-BEJ225


File: 01_Lecue_Sharper_lower_bounds_on_the_2010.pdf (112.87 kB, Adobe PDF)

