Learning Game Representations from Data Using Rationality Constraints

Gao, X. and Pfeffer, A.

Conference on Uncertainty in Artificial Intelligence (UAI 2010), Catalina Island, California, July 2010

While game theory is widely used to model strategic interactions, a natural question is where the game representations come from. One answer is to learn the representations from data. If one wants to learn both the payoffs and the players’ strategies, a naive approach is to learn them both directly from the data. This approach ignores the fact that the players might be playing reasonably good strategies, so there is a connection between the strategies and the payoffs. The main contribution of this paper is to exploit this connection while learning. We formulate the learning problem as a weighted constraint satisfaction problem, including constraints both for the fit of the payoffs and strategies to the data and for the fit of the strategies to the payoffs. We use quantal response equilibrium as our notion of rationality for quantifying the latter fit. Our results show that incorporating rationality constraints can improve learning when the amount of data is limited.
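As background on the rationality notion used above: in a logit quantal response equilibrium, each player mixes over actions in proportion to the exponentiated expected payoff of each action, with a rationality parameter controlling how sharply play concentrates on better actions. The sketch below computes a logit QRE of a two-player bimatrix game by damped fixed-point iteration. It is illustrative only, not the paper's learning algorithm; the function name, the damping scheme, and the convergence tolerance are assumptions.

```python
import numpy as np

def logit_qre(payoff_row, payoff_col, lam=1.0, iters=2000, tol=1e-10):
    """Damped fixed-point iteration for a logit quantal response
    equilibrium of a 2-player bimatrix game.

    lam is the rationality parameter: lam = 0 gives uniform random
    play; as lam grows, play approaches best response (Nash-like).
    (Illustrative sketch, not the algorithm from the paper.)
    """
    n, m = payoff_row.shape
    p = np.full(n, 1.0 / n)   # row player's mixed strategy
    q = np.full(m, 1.0 / m)   # column player's mixed strategy
    for _ in range(iters):
        # Expected utility of each pure action vs. the opponent's mix.
        u_row = payoff_row @ q
        u_col = payoff_col.T @ p
        # Logit (softmax) response; subtract the max for stability.
        p_new = np.exp(lam * (u_row - u_row.max()))
        p_new /= p_new.sum()
        q_new = np.exp(lam * (u_col - u_col.max()))
        q_new /= q_new.sum()
        if np.abs(p_new - p).max() < tol and np.abs(q_new - q).max() < tol:
            p, q = p_new, q_new
            break
        # Damping averages old and new mixes for stable convergence.
        p, q = 0.5 * (p + p_new), 0.5 * (q + q_new)
    return p, q

# Matching pennies: the logit QRE is uniform play for any lam.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p, q = logit_qre(A, -A, lam=2.0)
```

Intuitively, the paper's rationality constraints penalize strategy estimates that are far from a quantal response to the estimated payoffs, which regularizes learning when observed play data is scarce.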

For More Information

To learn more or request a copy of a paper (if available), contact Avi Pfeffer.

(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)