
Extreme State Aggregation beyond MDPs

Hutter, Marcus

Description

We consider a Reinforcement Learning setup without any (in particular, MDP) assumptions on the environment. State aggregation, and more generally feature reinforcement learning, is concerned with mapping histories/raw states to reduced/aggregated states. The idea behind both is that the resulting reduced process (approximately) forms a small stationary finite-state MDP, which can then be efficiently solved or learnt. We considerably generalize existing aggregation results by showing that even if the...
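
As a rough illustration of the idea described above (not the construction from the paper), the following sketch assumes a hypothetical aggregation map phi from raw histories to a small set of aggregated states, and solves the induced finite-state MDP by standard value iteration. The names phi and value_iteration, the toy hashing map, and the random transition/reward data are assumptions for illustration only.

    import numpy as np

    # Hypothetical aggregation map phi: raw history -> one of n_agg aggregated states.
    # Here we simply hash the most recent observation, purely for illustration.
    def phi(history, n_agg=4):
        return hash(history[-1]) % n_agg

    # Value iteration on the induced aggregated MDP. P has shape
    # (n_states, n_actions, n_states) and R has shape (n_states, n_actions);
    # both are assumed to have been estimated over the aggregated states.
    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        n_states, n_actions = R.shape
        V = np.zeros(n_states)
        while True:
            Q = R + gamma * (P @ V)            # Q[s,a] = R[s,a] + gamma * sum_s' P[s,a,s'] V[s']
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)  # approximate optimal values and greedy policy
            V = V_new

    # Toy usage with random data (illustrative only):
    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.random((n_states, n_actions))
    V, policy = value_iteration(P, R)

The point of aggregation is that value iteration (or learning) over the small aggregated state space is cheap, even when the original history space is huge; the paper's contribution concerns when the solution of such an aggregated MDP remains valid for the original, non-MDP environment.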

Collections: ANU Research Publications
Date published: 2014
Type: Conference paper
URI: http://hdl.handle.net/1885/58160
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Volume 8776
DOI: 10.1007/978-3-319-11662-4_14



