
The loss rank principle for model selection

Hutter, Marcus

Description

A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. We suggest a novel principle (LoRP) for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data would be fitted better. LoRP selects the model that has minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP only depends on the regression functions and the loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN.

dc.contributor.author: Hutter, Marcus
dc.date.accessioned: 2015-12-10T22:17:03Z
dc.identifier.isbn: 9783540729259
dc.identifier.uri: http://hdl.handle.net/1885/51229
dc.description.abstract: A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. We suggest a novel principle (LoRP) for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data would be fitted better. LoRP selects the model that has minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP only depends on the regression functions and the loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN.
dc.publisher: Springer
dc.relation.ispartof: Learning Theory
dc.relation.isversionof: 1st Edition
dc.rights: Copyright Information: © Springer-Verlag Berlin Heidelberg 2007. http://www.sherpa.ac.uk/romeo/issn/0302-9743/... "Author's post-print on any open access repository after 12 months after publication" from SHERPA/RoMEO site (as at 28/08/15)
dc.subject: Keywords: Computational complexity; Curve fitting; Learning systems; Polynomials; Regression analysis; Stochastic models; K nearest neighbor (kNN) regression; Loss rank principle; Model complexity; Model selection; Model checking
dc.title: The loss rank principle for model selection
dc.type: Book chapter
local.description.notes: Imported from ARIES
dc.date.issued: 2007
local.identifier.absfor: 080199 - Artificial Intelligence and Image Processing not elsewhere classified
local.identifier.absfor: 080401 - Coding and Information Theory
local.identifier.absfor: 010405 - Statistical Theory
local.identifier.ariespublication: u8803936xPUB219
local.type.status: Published Version
local.contributor.affiliation: Hutter, Marcus, College of Engineering and Computer Science, ANU
local.description.embargo: 2037-12-31
local.bibliographicCitation.startpage: 589
local.bibliographicCitation.lastpage: 603
dc.date.updated: 2016-02-24T11:43:42Z
local.bibliographicCitation.placeofpublication: Berlin, Germany
local.identifier.scopusID: 2-s2.0-38049041556

Collections: ANU Research Publications
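The abstract's central idea — rank a model by how many fictitious data sets it would fit at least as well as the observed one, and pick the model with the smallest rank — can be made concrete with a small Monte Carlo sketch for kNN regression. This is only an illustration under assumed choices (squared loss, fictitious targets drawn uniformly from the observed range, hypothetical helper names `knn_predict`, `empirical_loss`, `loss_rank`); the paper itself derives the loss rank in closed form via the log-volume of better-fitting data, not by sampling.

```python
import random

def knn_predict(xs, ys, k, x0):
    # Predict at x0 as the mean of the k nearest training targets (by |x - x0|).
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

def empirical_loss(xs, ys, k):
    # Total squared loss of the kNN regressor evaluated on its own training data.
    return sum((knn_predict(xs, ys, k, xs[i]) - ys[i]) ** 2 for i in range(len(xs)))

def loss_rank(xs, ys, k, n_samples=500):
    # Monte Carlo estimate of the loss rank: the fraction of fictitious target
    # vectors y' that the model fits at least as well as the observed y.
    # (Assumption: fictitious targets drawn uniformly from the observed range.)
    rng = random.Random(0)
    lo, hi = min(ys), max(ys)
    observed = empirical_loss(xs, ys, k)
    better = sum(
        empirical_loss(xs, [rng.uniform(lo, hi) for _ in ys], k) <= observed
        for _ in range(n_samples)
    )
    return better / n_samples

# Usage: noisy linear data; LoRP picks the k with minimal loss rank.
rng = random.Random(1)
xs = [i / 20 for i in range(20)]
ys = [2 * x + rng.gauss(0, 0.1) for x in xs]
ranks = {k: loss_rank(xs, ys, k) for k in (1, 2, 4, 8)}
best_k = min(ranks, key=ranks.get)
```

Note how the sketch reproduces the intuition behind the principle: with k = 1 the regressor interpolates every data set with zero loss, so its loss rank is maximal (it "fits everything"), and LoRP rejects it without needing any stochastic noise model — only the regression functions and the loss enter the computation. A finite-sample Monte Carlo cannot distinguish ranks below 1/n_samples, which is why the paper's analytic log-volume formulation is preferable in practice.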

Download

File                                            Size       Format     Access
01_Hutter_The_loss_rank_principle_for_2007.pdf  413.93 kB  Adobe PDF  Request a copy
02_Hutter_The_loss_rank_principle_for_2007.pdf  156.15 kB  Adobe PDF  Request a copy
03_Hutter_The_loss_rank_principle_for_2007.pdf   26.99 kB  Adobe PDF  Request a copy
04_Hutter_The_loss_rank_principle_for_2007.pdf  135.91 kB  Adobe PDF  Request a copy


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.
