Conference Paper (international conference)

Balancing Exploitation and Exploration via Fully Probabilistic Design of Decision Policies

Kárný Miroslav, Hůla František

In: Proceedings of the 11th International Conference on Agents and Artificial Intelligence, p. 857-864, Eds: Rocha A., Steels L., van den Herik J.

Conference: International Conference on Agents and Artificial Intelligence (Prague, CZ, 2019-02-19)

Grants: GA16-09848S (GA ČR), GA18-15970S (GA ČR)

Keywords: exploitation, exploration, adaptive systems, Bayesian estimation, fully probabilistic design, Markov decision process

DOI: 10.5220/0007587208570864

Full text: http://library.utia.cas.cz/separaty/2019/AS/hula-0503817.pdf

Abstract (eng): Adaptive decision making learns an environment model that serves the design of a decision policy. The policy-generated actions influence both the acquired reward and the future knowledge. The optimal policy properly balances exploitation with exploration. The inherent dimensionality curse of decision making under incomplete knowledge prevents the realisation of the optimal design. This has stimulated repeated attempts to reach this balance at least approximately. Usually, either: (a) the exploitative reward is enriched by a term reflecting the exploration quality and a feasible approximate certainty-equivalent design is made, or (b) explorative random noise is added to the purely exploitative actions. This paper avoids the inauspicious option (a) and improves on (b) by employing the non-standard fully probabilistic design (FPD) of decision policies, which naturally generates random actions. Monte Carlo experiments confirm the achieved quality. The quality stems from methodological contributions, which include: (i) an improved relation between FPD and standard Markov decision processes, and (ii) a design of an adaptive tuning of an FPD parameter. The latter is also suitable for tuning the temperature in both simulated annealing and the Boltzmann machine.
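The abstract's option (b) and the temperature-tuning remark can be illustrated by Boltzmann (softmax) action selection, where a temperature parameter controls the exploration level of a stochastic policy. The sketch below is illustrative only and is not the authors' FPD construction; the function names and the value estimates `q_values` are assumptions for the example.

```python
import math
import random

def boltzmann_policy(q_values, temperature):
    """Softmax distribution over actions given estimated values.

    Low temperature -> near-greedy (exploitation);
    high temperature -> near-uniform (exploration).
    Illustrative sketch, not the paper's FPD policy.
    """
    m = max(q_values)  # subtract max for numerical stability
    exps = [math.exp((q - m) / temperature) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def sample_action(q_values, temperature, rng=random):
    """Draw a random action from the Boltzmann policy."""
    probs = boltzmann_policy(q_values, temperature)
    r = rng.random()
    acc = 0.0
    for action, p in enumerate(probs):
        acc += p
        if r < acc:
            return action
    return len(probs) - 1  # guard against rounding
```

Because the policy itself is random, exploration emerges from sampling rather than from noise added to a deterministic action, which is the flavour of randomised policy that FPD produces natively.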

: BC

: 10201