Institute of Information Theory and Automation


Bibliography

Journal Article

Nash Q-learning agents in Hotelling's model: Reestablishing equilibrium

Vainer J., Kukačka Jiří

Journal: Communications in Nonlinear Science and Numerical Simulation, vol. 99, 105805

Grants: PRIMUS/19/HUM/17, Univerzita Karlova; UNCE/HUM/035, Univerzita Karlova

Keywords: Hotelling’s location model, Agent-based simulation, Reinforcement learning, Nash Q-learning

DOI: 10.1016/j.cnsns.2021.105805

Download: http://library.utia.cas.cz/separaty/2021/E/kukacka-0542311.pdf

Link: https://www.sciencedirect.com/science/article/pii/S1007570421001167

Abstract (eng): This paper examines adaptive agents’ behavior in a stochastic dynamic version of Hotelling’s location model. We conduct an agent-based numerical simulation in Hotelling’s setting with two agents who use the Nash Q-learning mechanism for adaptation. This allows us to explore what alterations this technique brings compared to the original analytic solution of the famous static game-theoretic model, which imposes strong assumptions on players. We find that under Nash Q-learning and a quadratic consumer cost function, agents with a sufficiently high valuation of future profits learn behavior resembling an aggressive market strategy: both agents make similar products and wage a price war to eliminate their opponent from the market. This behavior closely resembles the Principle of Minimum Differentiation from Hotelling’s original paper with linear consumer costs, whereas the quadratic consumer cost function would otherwise result in maximum differentiation of production in the original model. Thus, the Principle of Minimum Differentiation can be justified on the basis of repeated interactions of the agents and long-run optimization.
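The Nash Q-learning mechanism the abstract refers to can be illustrated with a minimal sketch: each agent keeps a Q-table over joint actions and bootstraps on the value of a Nash equilibrium of the stage game defined by the current Q-tables. The discrete price grid, the winner-takes-all demand rule, and all parameters below are illustrative assumptions, not the paper's actual environment.

```python
import itertools
import random

# Hedged sketch: hypothetical price grid and demand rule, not the paper's setup.
PRICES = [1, 2, 3]        # discrete price actions available to both firms
ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor
EPS = 0.2                 # exploration probability

def profits(p1, p2):
    """Toy demand: the cheaper firm serves the whole unit market; ties split it."""
    if p1 < p2:
        return p1, 0.0
    if p2 < p1:
        return 0.0, p2
    return p1 / 2, p2 / 2

# One state suffices for this stateless sketch: Q[i][(a1, a2)] per agent i.
Q = [{a: 0.0 for a in itertools.product(PRICES, PRICES)} for _ in range(2)]

def pure_nash(q0, q1):
    """Return a pure-strategy Nash equilibrium of the stage game given by the
    two Q-tables (falls back to the lowest-price cell if none is found)."""
    for (a1, a2) in itertools.product(PRICES, PRICES):
        best1 = all(q0[(a1, a2)] >= q0[(b, a2)] for b in PRICES)
        best2 = all(q1[(a1, a2)] >= q1[(a1, b)] for b in PRICES)
        if best1 and best2:
            return a1, a2
    return PRICES[0], PRICES[0]

random.seed(0)
for _ in range(5000):
    # epsilon-greedy over the current stage-game equilibrium
    if random.random() < EPS:
        a1, a2 = random.choice(PRICES), random.choice(PRICES)
    else:
        a1, a2 = pure_nash(Q[0], Q[1])
    r1, r2 = profits(a1, a2)
    # Nash value of the (single) next state: Q at the stage-game equilibrium
    n1, n2 = pure_nash(Q[0], Q[1])
    for i, r in enumerate((r1, r2)):
        nash_q = Q[i][(n1, n2)]
        Q[i][(a1, a2)] += ALPHA * (r + GAMMA * nash_q - Q[i][(a1, a2)])

print("learned equilibrium prices:", pure_nash(Q[0], Q[1]))
```

In this toy Bertrand-style game, undercutting drives both learners toward the bottom of the price grid, loosely echoing the price-war behavior the paper reports for agents with high valuation of future profits.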

Research field: AH

Field of study code: 50201
