Institute of Information Theory and Automation


Bibliography

Conference Paper (international conference)

Total reward variance in discrete and continuous time Markov chains

Sladký Karel, van Dijk N. M.

: Operations Research Proceedings 2004, p. 319-326, Eds: Fleuren H., den Hertog D., Kort P.

: Operations Research 2004 (Tilburg, NL, 01.09.2004-03.09.2004)

: CEZ:AV0Z10750506

: GA402/02/1015 (GA ČR), GA402/04/1294 (GA ČR)

: Markov reward processes with finite state space, expectation and variance of cumulative rewards

(eng): This note studies the variance of total cumulative rewards for Markov reward chains in both discrete and continuous time. It is shown that parallel results can be obtained for both cases. First, explicit formulae are presented for the variance over a finite time horizon. Next, the infinite time horizon is considered. Most notably, it is concluded that the variance grows asymptotically linearly in the time horizon. Explicit expressions, related to the standard average reward case, are provided for computing this growth rate.

(cze): The paper studies the variance of the total reward for Markov reward processes with both discrete and continuous time parameter. It is shown that analogous results can be obtained for both types of models. First, explicit formulae are derived for the variance of the total reward in finite time. Next, the case of an infinite planning horizon is studied. It is shown that for both continuous and discrete time parameter the growth of the total variance is asymptotically linear. Explicit expressions for computing the growth rate are found.
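
The claim of asymptotically linear variance growth can be checked numerically. The following is a minimal Monte Carlo sketch in Python, not taken from the paper: the transition matrix P, reward vector r, the helper simulate_total_reward, and the chosen horizons are illustrative assumptions only. For a small discrete-time Markov reward chain the printed ratio var/T should level off at the variance growth rate that the paper computes via explicit expressions.

import numpy as np

# Hypothetical 3-state discrete-time Markov reward chain used only for
# illustration; P and r are made-up example data.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
r = np.array([1.0, 0.0, 2.0])

rng = np.random.default_rng(0)

def simulate_total_reward(horizon, n_paths=20000, start=0):
    """Monte Carlo estimate of mean and variance of the cumulative reward."""
    totals = np.zeros(n_paths)
    states = np.full(n_paths, start)
    for _ in range(horizon):
        # Collect the reward of the current state, then transition.
        totals += r[states]
        u = rng.random(n_paths)
        cdf = np.cumsum(P[states], axis=1)
        states = (u[:, None] > cdf).sum(axis=1)
    return totals.mean(), totals.var()

for T in (50, 100, 200, 400):
    m, v = simulate_total_reward(T)
    print(f"T={T:4d}  mean={m:8.2f}  var={v:8.2f}  var/T={v/T:.3f}")

As T grows, var/T stabilizes, which is the linear growth rate of the total reward variance discussed in the abstract; mean/T likewise approaches the average reward.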

: 12B

: BB
