Learning without concentration for general loss functions

Authors

Mendelson, Shahar

Publisher

Springer

Abstract

We study the performance of empirical risk minimization in prediction and estimation problems carried out in a convex class, relative to a sufficiently smooth convex loss function. The framework is based on the small-ball method and is therefore suited to heavy-tailed problems. Moreover, one of its outcomes is that a well-chosen loss, calibrated to the noise level of the problem, negates some of the ill effects of outliers and boosts the confidence level, leading to Gaussian-like behaviour even when the target random variable is heavy-tailed.
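As an illustrative sketch only (not the paper's construction): the idea of a loss "calibrated to the noise level" can be seen in a standard robust-regression setting, where a smooth convex loss such as the Huber loss, with a scale parameter `delta` matched to the noise, keeps empirical risk minimization stable under heavy-tailed noise. All names and parameter choices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model with heavy-tailed noise (Student-t, 2 degrees
# of freedom, so the noise has infinite variance).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.ones(d)
y = X @ w_true + rng.standard_t(df=2, size=n)

def huber_grad(r, delta):
    """Gradient of the Huber loss with respect to the residual r.

    Quadratic (gradient r) for |r| <= delta, linear (gradient
    delta * sign(r)) beyond, so large outliers have bounded influence.
    """
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def erm_gd(X, y, delta, steps=2000, lr=0.01):
    """Plain gradient descent on the empirical Huber risk."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        r = X @ w - y
        w -= lr * (X.T @ huber_grad(r, delta)) / len(y)
    return w

# delta ~ 1 roughly matches the central scale of the t(2) noise.
w_hat = erm_gd(X, y, delta=1.0)
print(np.linalg.norm(w_hat - w_true))
```

With squared loss in place of the Huber loss, the same heavy-tailed noise would let a few outliers dominate the empirical risk; the calibrated loss caps their influence, which is the intuition behind the abstract's claim.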

Source

Probability Theory and Related Fields

Restricted until

2039-12-31