Temporal Difference Updating without a Learning Rate
Authors
Hutter, Marcus
Legg, Shane
Publisher
Neural Information Processing Systems Foundation
Abstract
We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(λ); however, it lacks the parameter α that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(λ) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins’ Q(λ) and Sarsa(λ) and find that it again offers superior performance without a learning rate parameter.
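For readers unfamiliar with the baseline the abstract contrasts against, the following is a minimal Python sketch of standard tabular TD(λ) with accumulating eligibility traces. The function name, argument names, and default values are illustrative choices, not taken from the paper; the fixed constant alpha below is precisely the free parameter that the paper's per-transition learning-rate formula (derived in the paper itself, not reproduced in this abstract) eliminates.

    import numpy as np

    def td_lambda_episode(V, transitions, alpha=0.1, gamma=0.9, lam=0.8):
        """Run one episode of tabular TD(lambda) with accumulating traces.

        V           : 1-D float array of state-value estimates, updated in
                      place (terminal states should be held at 0)
        transitions : iterable of (state, reward, next_state) tuples
        alpha       : fixed learning rate -- the parameter the paper's
                      HL-style rule replaces with a per-transition quantity
        """
        e = np.zeros_like(V)                      # eligibility trace per state
        for s, r, s_next in transitions:
            delta = r + gamma * V[s_next] - V[s]  # TD error for this transition
            e *= gamma * lam                      # decay every trace
            e[s] += 1.0                           # accumulate trace for current state
            V += alpha * delta * e                # credit all recently visited states
        return V

For example, V = np.zeros(3) followed by td_lambda_episode(V, [(0, 0.0, 1), (1, 1.0, 2)]) propagates the reward back through both visited states in a single episode, with alpha controlling the step size uniformly; in the paper's learning-rate-free variant that uniform step size is instead computed from statistics maintained for each state transition.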
Book Title
Advances in Neural Information Processing Systems 20