
Experiments with Infinite-Horizon, Policy-Gradient Estimation

Baxter, Jon; Bartlett, Peter; Weaver, L

Description

In this paper, we present algorithms that perform gradient ascent of the average reward in a partially observable Markov decision process (POMDP). These algorithms are based on GPOMDP, an algorithm introduced in a companion paper (Baxter & Bartlett, 2001), which computes biased estimates of the performance gradient in POMDPs. The algorithm's chief advantages are that it uses only one free parameter β ∈ [0, 1), which has a natural interpretation in terms of bias-variance trade-off, it requires no knowledge of the underlying state, and it can be applied to infinite state, control and observation spaces. We show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent, both with a traditional stochastic-gradient algorithm, and with an algorithm based on conjugate-gradients that utilizes gradient information to bracket maxima in line searches. Experimental results are presented illustrating both the theoretical results of Baxter and Bartlett (2001) on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.
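For orientation, the following is a minimal sketch of the GPOMDP estimator and plain stochastic-gradient ascent as described in the abstract. The three-state chain, the softmax policy, and all names here (step, gpomdp_estimate, the step size 0.5) are illustrative assumptions, not the paper's experimental domains; the conjugate-gradient variant with gradient-based line searches is not shown.

import numpy as np

N_STATES, N_ACTIONS = 3, 2

def step(s, a, rng):
    # Hypothetical chain dynamics: action 0 drifts left, action 1 drifts
    # right, each succeeding with probability 0.8. The reward is the index
    # of the state the chain lands in, so drifting right pays better.
    drift = -1 if a == 0 else 1
    if rng.random() >= 0.8:
        drift = -drift
    s_next = min(max(s + drift, 0), N_STATES - 1)
    return s_next, float(s_next)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gpomdp_estimate(theta, beta, T, seed):
    # GPOMDP: maintain an eligibility trace of score functions,
    #   z_{t+1} = beta * z_t + grad log mu_theta(u_t | y_t),
    # and average r_{t+1} * z_{t+1} over the trajectory. beta in [0, 1)
    # is the single free parameter trading bias against variance.
    # Here the observation is the state itself (a fully observed special
    # case, chosen only to keep the sketch short); GPOMDP itself needs
    # no knowledge of the underlying state.
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)      # eligibility trace
    grad = np.zeros_like(theta)   # running-average gradient estimate
    s = 0
    for t in range(T):
        probs = softmax(theta[s])
        a = rng.choice(N_ACTIONS, p=probs)
        score = np.zeros_like(theta)   # grad log pi(a | s) for softmax
        score[s] = -probs
        score[s, a] += 1.0
        s, r = step(s, a, rng)
        z = beta * z + score
        grad += (r * z - grad) / (t + 1)   # running average of r * z
    return grad

# Plain stochastic-gradient ascent on the average reward; the paper also
# uses a conjugate-gradient method with gradient-based line searches.
theta = np.zeros((N_STATES, N_ACTIONS))
for it in range(20):
    theta += 0.5 * gpomdp_estimate(theta, beta=0.9, T=20_000, seed=it)
print(theta)   # logits should come to favour the rightward action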

dc.contributor.author: Baxter, Jon
dc.contributor.author: Bartlett, Peter
dc.contributor.author: Weaver, L
dc.date.accessioned: 2015-12-10T23:11:57Z
dc.identifier.issn: 1076-9757
dc.identifier.uri: http://hdl.handle.net/1885/63908
dc.description.abstract: In this paper, we present algorithms that perform gradient ascent of the average reward in a partially observable Markov decision process (POMDP). These algorithms are based on GPOMDP, an algorithm introduced in a companion paper (Baxter & Bartlett, 2001), which computes biased estimates of the performance gradient in POMDPs. The algorithm's chief advantages are that it uses only one free parameter β ∈ [0, 1), which has a natural interpretation in terms of bias-variance trade-off, it requires no knowledge of the underlying state, and it can be applied to infinite state, control and observation spaces. We show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent, both with a traditional stochastic-gradient algorithm, and with an algorithm based on conjugate-gradients that utilizes gradient information to bracket maxima in line searches. Experimental results are presented illustrating both the theoretical results of Baxter and Bartlett (2001) on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.
dc.publisher: Morgan Kaufmann Publishers
dc.source: Journal of Artificial Intelligence Research
dc.title: Experiments with Infinite-Horizon, Policy-Gradient Estimation
dc.type: Journal article
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.citationvolume: 15
dc.date.issued: 2001
local.identifier.absfor: 010303 - Optimisation
local.identifier.ariespublication: MigratedxPub862
local.type.status: Published Version
local.contributor.affiliation: Baxter, Jon, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Bartlett, Peter, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Weaver, L, College of Engineering and Computer Science, ANU
local.description.embargo: 2037-12-31
local.bibliographicCitation.startpage: 351
local.bibliographicCitation.lastpage: 381
dc.date.updated: 2015-12-10T09:25:21Z
local.identifier.scopusID: 2-s2.0-0013495368
Collections: ANU Research Publications

Download

File: 01_Baxter_Experiments_with_2001.pdf
Size: 246.58 kB
Format: Adobe PDF
Access: Request a copy


