
PAC bounds for discounted MDPs

Lattimore, Tor; Hutter, Marcus

Description

We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by...
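As a rough illustration of the kind of bound described above (a sketch based only on the abstract, not a quotation of the paper's theorem): writing T for the number of non-zero transition probabilities, gamma for the discount factor (so the effective horizon is 1/(1-gamma)), epsilon for the accuracy and delta for the confidence parameter, a sample-complexity bound that is linear in T and cubic in the horizon would take the PAC form

\[
  \tilde{O}\!\left( \frac{T}{\epsilon^{2}\,(1-\gamma)^{3}} \,\log\frac{1}{\delta} \right),
\]

where the 1/epsilon^2 and log(1/delta) factors are the standard PAC scalings assumed here for illustration, and logarithmic factors are suppressed by the tilde.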

Collections: ANU Research Publications
Date published: 2012
Type: Conference paper
URI: http://hdl.handle.net/1885/69046
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
DOI: 10.1007/978-3-642-34106-9_26

There are no files associated with this item.


