Stationary Markov Equilibrium Strategies in Asynchronous Stochastic Games: Existence and Computation

Bibliographic Details
Main Authors: Subir K. Chakrabarti, Jianan Chen, Qin Hu
Format: Article
Language: English
Published: MDPI AG 2024-11-01
Series: Algorithms
Subjects:
Online Access: https://www.mdpi.com/1999-4893/17/11/490
Summary: We study asynchronous dynamic games and show that, in games with a finite state space and finite action sets, a pure-strategy Markov perfect equilibrium can be obtained by a simple backward-induction method when the time horizon of the game is finite. The equilibrium strategies for games with an infinite horizon are then obtained as the pointwise limit of the equilibrium strategies of a sequence of finite-horizon games, where the finite-horizon games are truncations of the original game with successively longer horizons. We also show that if the game has a fixed K-period cycle, then there is a stationary Markov equilibrium. Using these results, we derive an algorithm to compute the equilibrium strategies. We test the algorithm in three experiments. The first is a two-player asynchronous game with three states and three actions. In the second experiment, we compute the equilibrium of a cybersecurity game with two players, an attacker and a defender. In the third experiment, we compute the stationary equilibrium of a duopoly game in which two firms choose outputs in alternating periods.
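
As a rough illustration of the backward-induction step described in the summary, the sketch below computes Markov strategies for a toy finite-horizon asynchronous game. It assumes players alternate moves, transitions are deterministic, and the non-mover earns no flow payoff; the state space, action sets, payoffs, and all names here are illustrative assumptions, not the paper's experiments or implementation. Lengthening the horizon gives the finite-horizon truncations whose pointwise limit the summary refers to.

```python
import random

# Minimal sketch: backward induction for a finite-horizon asynchronous
# stochastic game in which exactly one player moves each period.
# All data below (payoffs, transitions, horizon) are illustrative assumptions.

N_STATES = 3      # finite state space {0, 1, 2}
N_ACTIONS = 3     # finite action set {0, 1, 2} for the moving player
N_PLAYERS = 2
HORIZON = 10      # finite time horizon T

random.seed(0)

# Stage payoff of the moving player: payoff[player][state][action].
payoff = [[[random.random() for _ in range(N_ACTIONS)]
           for _ in range(N_STATES)] for _ in range(N_PLAYERS)]
# Deterministic state transition: transition[state][action] -> next state.
transition = [[random.randrange(N_STATES) for _ in range(N_ACTIONS)]
              for _ in range(N_STATES)]

def backward_induction():
    """Roll value functions back from the terminal period; the period-t mover
    best-responds to the continuation values, yielding Markov strategies
    sigma[t][state] = action of the player who moves at t."""
    V = [[0.0] * N_STATES for _ in range(N_PLAYERS)]   # terminal values are 0
    sigma = [[0] * N_STATES for _ in range(HORIZON)]
    for t in reversed(range(HORIZON)):
        mover = t % N_PLAYERS    # players alternate moves (asynchronous play)
        newV = [[0.0] * N_STATES for _ in range(N_PLAYERS)]
        for s in range(N_STATES):
            # The mover maximizes current payoff plus own continuation value.
            best_a = max(range(N_ACTIONS),
                         key=lambda a: payoff[mover][s][a]
                                       + V[mover][transition[s][a]])
            sigma[t][s] = best_a
            s_next = transition[s][best_a]
            for p in range(N_PLAYERS):
                flow = payoff[p][s][best_a] if p == mover else 0.0
                newV[p][s] = flow + V[p][s_next]
        V = newV
    return sigma, V

strategies, values = backward_induction()
print("Period-0 equilibrium actions by state:", strategies[0])
```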
ISSN: 1999-4893