Discrete MDL predicts in total variation
The Minimum Description Length (MDL) principle selects the model that yields the shortest code for the data plus the model. We show that, for a countable class of models, MDL predictions are close to the true distribution in a strong sense. The result is completely general: no independence, ergodicity, stationarity, identifiability, or other assumptions about the model class need to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically ...
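The two-part code idea in the abstract can be sketched concretely. The toy below (not the paper's construction) selects a Bernoulli parameter from a countable class of dyadic rationals, charging `d` bits to describe a parameter of precision `2^-d` plus the `-log2`-likelihood of the data; the depth cost and the specific candidate class are illustrative assumptions.

```python
import math

def nll_bits(p, data):
    """Code length of binary data under Bernoulli(p): -log2 likelihood, in bits."""
    return -sum(math.log2(p if x else 1 - p) for x in data)

def mdl_select(data, max_depth=4):
    """Return (total bits, p) minimizing model bits + data bits over
    dyadic candidates p = m / 2^d. Charging d bits per candidate is an
    illustrative prefix-code choice, not the paper's construction."""
    best = None
    for d in range(1, max_depth + 1):
        for m in range(1, 2 ** d, 2):  # odd m: p not expressible at a lower depth
            p = m / 2 ** d
            total = d + nll_bits(p, data)
            if best is None or total < best[0]:
                best = (total, p)
    return best

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # 8 ones, 2 zeros
total, p = mdl_select(data)
print(p)  # MDL prefers the coarser p = 3/4 over the closer-fitting 13/16
```

Among these candidates, maximum likelihood alone would pick `p = 13/16` (closest to the empirical frequency 0.8), but its 4-bit description cost outweighs the slightly better fit, so the two-part code settles on `p = 3/4`.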
Collections: ANU Research Publications
Source: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS 23)