The Sample-Complexity of General Reinforcement Learning
We present a new algorithm for general reinforcement learning where the true environment is known to belong to a finite class of N arbitrary models. The algorithm is shown to be near-optimal for all but O(N log² N) time-steps with high probability. Infinite classes are also considered, where we show that compactness is a key criterion for the existence of uniform sample-complexity bounds. A matching lower bound is given for the finite case.
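To give a flavour of how a learner can exploit a finite model class, the following is a minimal illustrative sketch of an elimination-style procedure: it is in the spirit of the setting above, not a reproduction of the paper's algorithm. Each candidate "model" here simply predicts a mean reward, and models whose predictions are statistically inconsistent with observations are discarded; all names, constants, and the Hoeffding-style confidence radius are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): eliminate candidate
# models from a finite class when their predictions become statistically
# inconsistent with observed Bernoulli rewards.
import math
import random

def eliminate(candidate_means, true_mean, horizon=2000, delta=0.05, seed=0):
    """Return the set of candidate mean-reward models still consistent
    with `horizon` Bernoulli(true_mean) observations."""
    rng = random.Random(seed)
    alive = set(candidate_means)
    total = 0.0
    for n in range(1, horizon + 1):
        total += 1.0 if rng.random() < true_mean else 0.0
        emp = total / n  # empirical mean reward so far
        # Hoeffding-style confidence radius, union-bounded over the class
        # and the horizon (constants chosen for illustration only).
        radius = math.sqrt(
            math.log(2 * len(candidate_means) * horizon / delta) / (2 * n)
        )
        # Discard models whose predicted mean falls outside the interval.
        alive = {m for m in alive if abs(m - emp) <= radius}
    return alive

surviving = eliminate([0.1, 0.5, 0.9], true_mean=0.5)
print(surviving)
```

With high probability the true model is never eliminated, while each wrong model is discarded once enough samples have been gathered; the sample-complexity question studied in the paper is, roughly, how many time-steps such a learner can act sub-optimally before this happens.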
Collections: ANU Research Publications
Source: Lattimore, The Sample-Complexity of General Reinforcement Learning (2013)