
Composite binary losses

Reid, Mark; Williamson, Robert

dc.contributor.author: Reid, Mark
dc.contributor.author: Williamson, Robert
dc.date.accessioned: 2015-12-10T23:05:53Z
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/1885/62547
dc.description.abstract: We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses and give a complete characterisation of the relationship between proper losses and "classification calibrated" losses. We also consider the question of the "best" surrogate binary loss. We introduce a precise notion of "best" and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of "surrogate tuning" as well as providing an explicit characterisation of when Bregman divergences on the unit interval are convex in their second argument. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.
dc.publisher: MIT Press
dc.source: Journal of Machine Learning Research
dc.source.uri: http://jmlr.csail.mit.edu/papers/v11/reid10a.html
dc.subject: Keywords: Bregman divergences; Classification; Classification-calibrated; Convexity; Estimation; Fisher consistency; Function evaluation; Misclassification noise; Probability estimation; Proper scoring rules; Regret bound; Robustness; Surrogate loss
dc.title: Composite binary losses
dc.type: Journal article
local.description.notes: Imported from ARIES
local.identifier.citationvolume: 11
dc.date.issued: 2010
local.identifier.absfor: 080109 - Pattern Recognition and Data Mining
local.identifier.ariespublication: f2965xPUB710
local.type.status: Published Version
local.contributor.affiliation: Reid, Mark, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Williamson, Robert, College of Engineering and Computer Science, ANU
local.description.embargo: 2037-12-31
local.bibliographicCitation.startpage: 2387
local.bibliographicCitation.lastpage: 2422
local.identifier.absseo: 970108 - Expanding Knowledge in the Information and Computing Sciences
dc.date.updated: 2016-02-24T08:32:02Z
local.identifier.scopusID: 2-s2.0-78649418936
local.identifier.thomsonID: 000282523400001

Collections: ANU Research Publications
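
The abstract's central construction — a composite loss formed by composing a proper loss with (the inverse of) a link function — can be illustrated with the familiar logistic loss: composing the proper log loss with the sigmoid inverse link recovers the logistic margin loss, one instance of the paper's point that margin losses can be proper composite losses. A minimal sketch under that reading (the function names are illustrative, not code from the paper):

```python
import math

def sigmoid(v):
    # inverse of the logit link psi(p) = log(p / (1 - p))
    return 1.0 / (1.0 + math.exp(-v))

def log_loss(y, p):
    # proper log loss on a class probability estimate p for label y in {-1, +1}
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def composite_loss(y, v):
    # composite loss: proper loss evaluated at the inverse link of the real score v
    return log_loss(y, sigmoid(v))

def logistic_margin(y, v):
    # the usual logistic margin loss, for comparison
    return math.log(1.0 + math.exp(-y * v))
```

For labels y in {-1, +1}, `composite_loss` and `logistic_margin` agree pointwise, since -log(sigmoid(v)) = log(1 + exp(-v)) and 1 - sigmoid(v) = sigmoid(-v).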

Download

File: 01_Reid_Composite_binary_losses_2010.pdf
Size: 689.58 kB
Format: Adobe PDF
Access: Request a copy


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.

Updated: 19 May 2020 / Responsible Officer: University Librarian / Page Contact: Library Systems & Web Coordinator