Classical statistical learning theory studies the generalisation performance of machine learning algorithms rather indirectly. One of the main detours is that algorithms are studied in terms of the hypothesis class from which they draw their hypotheses. In this paper, motivated by the luckiness framework of Shawe-Taylor et al. (1998), we study learning algorithms more directly and in a way that allows us to exploit the serendipity of the training sample. The main difference to previous...
Collections: ANU Research Publications
Source: Advances in Neural Information Processing Systems 14
Files: Herbrich_Algorithmic2002errata.pdf (98.86 kB, Adobe PDF); Herbrich_Algorithmic2002.pdf (404.95 kB, Adobe PDF)