Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming

Title: Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming
Author: Bertsekas, Dimitri P.; Yu, Huizhen
Belongs to series: Report LIDS-2831; also published as Department of Computer Science Series of Publications C, Report C-2010-10
Abstract: We consider the classical finite-state discounted Markovian decision problem, and we introduce a new policy iteration-like algorithm for finding the optimal Q-factors. Instead of policy evaluation by solving a linear system of equations, our algorithm requires (possibly inexact) solution of a nonlinear system of equations, involving estimates of state costs as well as Q-factors. This is Bellman's equation for an optimal stopping problem that can be solved with simple Q-learning iterations, in the case where a lookup table representation is used; it can also be solved with the Q-learning algorithm of Tsitsiklis and Van Roy [TsV99], in the case where feature-based Q-factor approximations are used. In exact/lookup table representation form, our algorithm admits asynchronous and stochastic iterative implementations, in the spirit of asynchronous/modified policy iteration, with lower overhead and more reliable convergence than existing Q-learning schemes. Furthermore, for large-scale problems, where linear basis function approximations and simulation-based temporal difference implementations are used, our algorithm effectively resolves the inherent difficulties of existing schemes due to inadequate exploration. (A lookup-table sketch of the evaluation step appears after the record metadata below.)
URI: http://hdl.handle.net/10138/17117
Date: 2010-06-15
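
The abstract describes the evaluation step concretely enough to sketch. The following is a minimal lookup-table illustration in Python of the kind of iteration the abstract outlines: policy evaluation solves a nonlinear system whose continuation value at the next state is min(J(s'), Q(s', mu(s'))), i.e., Bellman's equation for an optimal stopping problem, by simple fixed-point sweeps; improvement then updates the cost estimates and the greedy policy. All names, shapes, and parameters (P, g, alpha, n_iters, n_sweeps) are illustrative assumptions, not the paper's notation.

    import numpy as np

    # Minimal lookup-table sketch of the policy-iteration-like scheme from
    # the abstract. Assumed inputs (illustrative, not the paper's notation):
    #   P[a]    : (n_states x n_states) transition matrix for action a
    #   g[s, a] : expected one-stage cost of action a at state s
    #   alpha   : discount factor in (0, 1)
    def enhanced_policy_iteration(P, g, alpha, n_iters=50, n_sweeps=30):
        n_actions, n_states, _ = P.shape
        Q = np.zeros((n_states, n_actions))   # Q-factor estimates
        J = np.zeros(n_states)                # state-cost estimates
        mu = np.zeros(n_states, dtype=int)    # current policy
        for _ in range(n_iters):
            # "Policy evaluation": solve the nonlinear system
            #   Q(s,a) = sum_s' P(s'|s,a) [g(s,a) + alpha*min(J(s'), Q(s',mu(s')))],
            # Bellman's equation for an optimal stopping problem (stop at s'
            # and collect J(s'), or continue under mu), here by fixed-point
            # sweeps rather than a linear solve.
            for _ in range(n_sweeps):
                cont = np.minimum(J, Q[np.arange(n_states), mu])
                for a in range(n_actions):
                    Q[:, a] = g[:, a] + alpha * (P[a] @ cont)
            # Policy improvement: refresh cost estimates and take the greedy
            # (cost-minimizing) policy with respect to the current Q-factors.
            J = Q.min(axis=1)
            mu = Q.argmin(axis=1)
        return Q, J, mu

Run on any small MDP given as (P, g, alpha). With n_sweeps large enough the evaluation step approaches the exact fixed point, recovering policy-iteration-like behavior, while few sweeps give an optimistic/modified variant, consistent with the abstract's remark on asynchronous/modified policy iteration.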

Files in this item

Enhanced_Policy_Iteration_BY.pdf (PDF, 624.9 kB)