Context tree maximizing reinforcement learning
Recent developments in reinforcement learning for non-Markovian problems have witnessed a surge in history-based methods, among which we are particularly interested in two frameworks: ΦMDP and MC-AIXI-CTW. ΦMDP attempts to reduce the general RL problem, where
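The history-to-MDP reduction referred to above can be illustrated with a toy sketch. The fixed-depth suffix map `phi` below is a hypothetical stand-in for a φ map (in ΦMDP the map is learned, e.g. over context trees, rather than fixed); the helper `count_transitions` is likewise an assumed illustration of building empirical MDP statistics over the abstracted states.

```python
from collections import defaultdict

def phi(history, depth=2):
    # Toy φ map: abstract a history to its last `depth` symbols.
    # (Illustrative only; the actual φ in ΦMDP is selected by
    # optimizing a cost criterion, not fixed in advance.)
    return tuple(history[-depth:])

def count_transitions(trajectory, depth=2):
    # Tabulate empirical state-to-state transition counts under
    # the abstraction induced by phi.
    counts = defaultdict(int)
    for t in range(depth, len(trajectory) - 1):
        s = phi(trajectory[:t + 1], depth)
        s_next = phi(trajectory[:t + 2], depth)
        counts[(s, s_next)] += 1
    return counts

traj = list("abababab")
counts = count_transitions(traj)
```

Under this sketch, the non-Markovian history stream is collapsed to a small state space on which ordinary MDP machinery (value iteration, Q-learning) can then be run.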
Collections: ANU Research Publications
Source: Proceedings of the National Conference on Artificial Intelligence
File: 01_Nguyen_Context_tree_maximizing_2012.pdf