
Optimal regret bounds for selecting the state representation in reinforcement learning

Maillard, Odalric-Ambrym; Nguyen, Phuong; Ortner, Ronald; Ryabko, Daniil


We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the…
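The setting described above can be sketched in code. The following is a minimal illustration, not the paper's algorithm: the type names, the two candidate representations, and the observation encoding are all hypothetical, chosen only to show what "a representation mapping histories to a discrete state space" means and why only some candidates induce a Markov state.

```python
from typing import Callable, List, Tuple

# Hypothetical types for illustration: a "representation" maps the full
# interaction history (action, observation, reward triples) to a discrete state.
History = List[Tuple[int, int, float]]
Representation = Callable[[History], int]

def last_observation(history: History) -> int:
    """State = most recent observation (Markov only if the environment
    happens to be an MDP over single observations)."""
    return history[-1][1] if history else 0

def last_two_observations(history: History) -> int:
    """State = the two most recent observations, encoded as one integer
    (assumes observations are < 100; padding with 0 when history is short)."""
    obs = [h[1] for h in history[-2:]]
    while len(obs) < 2:
        obs.insert(0, 0)
    return obs[0] * 100 + obs[1]

# The agent is handed several such candidates, only some of which yield an MDP;
# a selection strategy must identify a good one online from the reward stream.
candidates: List[Representation] = [last_observation, last_two_observations]

history: History = [(0, 3, 1.0), (1, 7, 0.0)]
states = [phi(history) for phi in candidates]
print(states)  # each candidate compresses the same history differently
```

A full method would additionally maintain optimistic value estimates per representation and discard those whose MDP predictions are contradicted by observed rewards; the sketch only fixes the interface.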

Collections: ANU Research Publications
Date published: 2013
Type: Conference paper
Source: The Sample-Complexity of General Reinforcement Learning


File: 01_Maillard_Optimal_regret_bounds_for_2013.pdf (189.86 kB, Adobe PDF, request a copy)

Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.

Updated: 19 May 2020 / Responsible Officer: University Librarian / Page Contact: Library Systems & Web Coordinator