Asymptotic learnability of reinforcement problems with arbitrary dependence

Date

2006

Authors

Ryabko, Daniil
Hutter, Marcus

Publisher

Springer

Abstract

We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e. environments more general than (PO)MDPs. The task for the agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We find sufficient conditions on the class of environments under which an agent exists that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.
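
The minimal Python sketch below is our own illustration of the setting described in the abstract, not the paper's construction: it uses a finite (rather than countable) family of toy stateless environments, and an agent that knows the family but not the true member plays the optimal action of the candidate most consistent with its observations so far. The environment class, the forced-exploration schedule, and the matching rule are all illustrative assumptions.

import random

# Toy illustration (not the paper's algorithm) of the setting in the abstract:
# the true environment is one member of a known family, and the agent tries to
# attain the best possible asymptotic average reward without knowing which
# member it faces.  Environments here are stateless for simplicity.

class TwoActionEnv:
    """Action a in {0, 1} yields reward 1 with probability p[a], else 0."""
    def __init__(self, p):
        self.p = p
    def step(self, action):
        return 1.0 if random.random() < self.p[action] else 0.0

# The agent knows this family of candidate environments, but not which member
# generates its rewards (illustrative parameters).
FAMILY = [TwoActionEnv((0.2, 0.8)), TwoActionEnv((0.7, 0.3))]

def run(true_env, horizon=100000):
    """Consistency-based agent: keep empirical reward estimates per action,
    pick the family member closest to those estimates, and play its optimal
    action, with a small amount of forced exploration."""
    counts = [0, 0]
    sums = [0.0, 0.0]
    total = 0.0
    for t in range(horizon):
        if t < 200 or random.random() < 0.01:   # forced exploration
            a = t % 2
        else:
            means = [sums[i] / max(counts[i], 1) for i in range(2)]
            best = min(FAMILY,
                       key=lambda e: sum((e.p[i] - means[i]) ** 2 for i in range(2)))
            a = max(range(2), key=lambda i: best.p[i])
        r = true_env.step(a)
        counts[a] += 1
        sums[a] += r
        total += r
    return total / horizon

if __name__ == "__main__":
    random.seed(0)
    for i, env in enumerate(FAMILY):
        avg = run(env)
        print("true env %d: average reward %.3f (best possible %.1f)"
              % (i, avg, max(env.p)))

In this toy run the empirical averages single out the true member of the family, so the agent's average reward approaches the best value attainable in that environment; the paper asks when guarantees of this kind are possible for countable classes of environments with arbitrary stochastic dependence on the past.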

Keywords

Asymptotic stability; Decision theory; Intelligent agents; Markov processes; Problem solving; Asymptotic learnability; Markov Decision Processes (MDP); Reinforcement learning; Learning systems

Source

Proceedings of the International Conference on Algorithmic Learning Theory (ALT 2006)

Type

Conference paper

Restricted until

2037-12-31