
Bibliography

Journal Article

Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

Cavazos-Cadena R., Montes-de-Oca R., Sladký K.

: Journal of Applied Probability vol.52, 2 (2015), p. 419-440

: 171396, GA AV ČR

: Dominated convergence theorem for the expected average criterion, discrepancy function, Kolmogorov inequality, innovations, strong sample-path optimality

: 10.1239/jap/1437658607

: http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

(eng): This work concerns discrete-time Markov decision chains with a denumerable state space and compact action sets. Besides standard continuity requirements, the main assumption on the model is that it admits a Lyapunov function m. In this context the average reward criterion is analyzed from the sample-path point of view. The main conclusion is that if the expected average reward associated with m^2 is finite under any policy, then a stationary policy obtained from the optimality equation in the standard way is sample-path average optimal in a strong sense.
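The abstract's key step, obtaining a stationary policy "from the optimality equation in the standard way", can be illustrated numerically. The sketch below uses relative value iteration on a hypothetical 2-state, 2-action MDP; the transition matrices and rewards are invented for illustration, and the finite state space sidesteps the Lyapunov-function machinery the paper needs for the denumerable case.

```python
import numpy as np

# Hypothetical toy MDP: P[a][s, s'] are transition probabilities under
# action a, R[a][s] the one-step rewards. Not taken from the paper.
P = [np.array([[0.9, 0.1],
               [0.2, 0.8]]),
     np.array([[0.5, 0.5],
               [0.6, 0.4]])]
R = [np.array([1.0, 0.0]),
     np.array([2.0, -0.5])]

def relative_value_iteration(P, R, iters=2000):
    """Approximately solve the average-reward optimality equation
        g + h(s) = max_a [ R_a(s) + sum_{s'} P_a(s, s') h(s') ]
    by relative value iteration, and read off a stationary policy
    that attains the maximum in each state."""
    n = P[0].shape[0]
    h = np.zeros(n)
    for _ in range(iters):
        Q = np.array([R[a] + P[a] @ h for a in range(len(P))])  # shape (A, S)
        Th = Q.max(axis=0)
        h = Th - Th[0]  # subtract a reference value so h stays bounded
    g = Th[0]                  # approximate optimal average reward (gain)
    policy = Q.argmax(axis=0)  # stationary policy from the optimality equation
    return g, h, policy

g, h, policy = relative_value_iteration(P, R)
```

The paper's result says that, under its assumptions, a policy obtained this way is not only optimal in expectation but sample-path average optimal in a strong sense.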

: BC