Sufficiency of Markov policies for continuous-time Markov decision processes and solutions to Kolmogorov's forward equation for jump Markov processes

Feinberg, E A and Mandava, M and Shiryaev, A N (2013) Sufficiency of Markov policies for continuous-time Markov decision processes and solutions to Kolmogorov's forward equation for jump Markov processes. In: 52nd IEEE Conference on Decision and Control.

Full text not available from this repository.

Abstract

In continuous-time Markov decision processes (CTMDPs) with Borel state and action spaces and unbounded transition rates, we construct, for an arbitrary policy, a relaxed Markov policy such that the marginal distribution on the state-action pairs at each time instant is the same for both policies. This result implies the existence of a relaxed Markov policy that performs as well as an arbitrary policy with respect to expected discounted and non-discounted total costs, as well as average costs per unit time. The proof consists of two steps. The first step establishes properties of solutions to Kolmogorov's forward equation for jump Markov processes. The second step applies these results to CTMDPs.
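For reference, Kolmogorov's forward equation mentioned in the abstract can be sketched in standard notation for a jump Markov process; the symbols below (state space S, marginals P_t, rate kernel q) are generic assumptions and not taken from the paper itself:

```latex
% Kolmogorov's forward equation for a jump Markov process on a
% Borel state space S. P_t denotes the marginal distribution of
% the process at time t, and q(t, x, B) is the transition rate
% kernel: q(t, x, B) >= 0 for x not in B, and q(t, x, S) = 0.
\frac{d}{dt}\, P_t(B) \;=\; \int_S q(t, x, B)\, P_t(dx),
\qquad B \in \mathcal{B}(S).
```

The paper's first step concerns properties of solutions to equations of this type when the rates are unbounded.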

Item Type: Conference or Workshop Item (Paper)
Additional Information: The article was published while the author was affiliated with Stony Brook University.
Subjects: Information Systems; Operations Management
Date Deposited: 03 Apr 2019 12:09
Last Modified: 03 Apr 2019 12:09
URI: https://eprints.exchange.isb.edu/id/eprint/745
