Learning with Symmetric Label Noise: The Importance of Being Unhinged
Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio...
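The loss studied in the paper is the "unhinged" loss, whose margin form is linear, ℓ(v) = 1 − v. The sketch below is an assumption-laden illustration, not the authors' reference implementation: it uses the known closed-form minimiser of the norm-constrained unhinged risk over linear scorers (the mean of the label-weighted inputs) and checks behaviour under symmetric label flips on synthetic data.

```python
import numpy as np

def unhinged_loss(y, score):
    # Unhinged loss on the margin v = y * score: ell(v) = 1 - v (convex, linear).
    return 1.0 - y * score

def fit_unhinged_linear(X, y):
    # With a norm constraint on w, the minimiser of the empirical unhinged
    # risk over f(x) = <w, x> is proportional to the mean of y_i * x_i.
    return (y[:, None] * X).mean(axis=0)

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
y_clean = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# Symmetric label noise: flip each label independently with probability 0.3.
flip = rng.random(n) < 0.3
y_noisy = np.where(flip, -y_clean, y_clean)

w = fit_unhinged_linear(X, y_noisy)
accuracy = (np.sign(X @ w) == y_clean).mean()
```

Because symmetric noise only rescales the mean of y·x (by a factor 1 − 2ρ) without rotating it, the learned direction, and hence the classifier, is essentially unchanged by the flips; this is an informal illustration of the SLN-robustness the abstract claims.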
|Collections||ANU Research Publications|
|Source:||Advances in Neural Information Processing Systems 28 (NIPS 2015)|
|Access Rights:||Open Access|