Boudewijn de Bruin - On the Narrow Epistemology of Game Theoretic Agents





Published as "On the Narrow Epistemology of Game Theoretic Agents." Games: Unifying Logic, Language, and Philosophy. Ed. Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo. Berlin: Springer, forthcoming.

Boudewijn de Bruin

Boudewijn de Bruin is assistant professor in the Faculty of Philosophy of the University of Groningen. De Bruin did his undergraduate work in musical composition at Enschede, and in mathematics and philosophy at Amsterdam, Berkeley, and Harvard. He obtained his Ph.D. in philosophy from the Institute for Logic, Language, and Computation in Amsterdam. His doctoral dissertation, written under the supervision of Johan van Benthem and Martin Stokhof, was on the logic of game-theoretic explanations.

The main claim of this paper is that the epistemological presuppositions non-cooperative game theory makes about players of games are unacceptably narrow. Here, with 'epistemological' I do not intend to refer to the assumptions about the players' beliefs (about the game and about each other's rationality) that may or may not be sufficient to ensure that the outcome of the game satisfies some solution concept.[1] Rather, I use the term 'epistemological' in its philosophical sense to refer to those aspects of the players that have to do with the way in which they use evidence to form beliefs. The claim is then that game theory makes unacceptable assumptions about how players form beliefs about their opponents' prospective choice of action. Here, I do not intend to refer to the assumptions about how the players will or would change their beliefs during the game on the basis of information about the behavior of their opponents.[2] Rather, I wish to consider on the basis of what sorts of information the players form their beliefs and their belief revision policies. The claim is then that the evidence that the players are assumed to use to form their beliefs, as well as their belief revision policies, is of a peculiarly restricted and exclusive kind.

The structure of the paper is the following. First, I present a logical analysis of rational choice-theoretic and game-theoretic explanations of actions. It is then noted that game-theoretic explanations give rise to questions about the role of the beliefs of players in action explanation. I argue that epistemic characterization theorems are the only means to answer these questions adequately. I conclude by showing that it is precisely this kind of theorem that reveals the epistemological problems of game-theoretic agents. Throughout the paper, 'rational choice theory' is the theory of parametric interaction, of 'games against nature', and 'game theory' is the theory of strategic interaction.

Table 1. Notation.

RCT(D, C)        D is a rational choice-theoretic model for C
GT(Γ, C)         Γ is a game that models C
Ut(S, C, u)      u is S's utility function in C
VNM(u)           u satisfies the von Neumann and Morgenstern axioms
Prob(S, C, P)    P is S's expectations in C
Kolm(P)          P satisfies the Kolmogorov axioms
Perf(S, C, a)    S performed a in C
Max(D, u, P, a)  a solves the maximization problem of u and P in D
Nash(Γ, u, a)    a is part of a Nash equilibrium of Γ with utility function u

Some preliminary logical analysis first. Suppose a rational choice theorist explains the action of some agent as maximizing expected utility. He — or, of course, she — would say something like:

Agent S maximizes his expected utility in choice situation C by performing action a.

I am grateful to Johan van Benthem and Martin Stokhof for many inspiring discussions concerning the topic of this paper, and to the participants of the 2004 Prague colloquium on Logic, Games, and Philosophy: Foundational Perspectives for fruitful debate. Thanks, too, to two anonymous referees.

[1] The so-called 'epistemic characterization' results that are, for instance, surveyed in Battigalli and Bonanno 1999.

[2] The so-called 'belief revision' policies studied in, for instance, Stalnaker 1996 and Stalnaker 1998.

What precisely does he say? Without aspiring to completeness of analysis here, I distinguish an 'existential' reading from a 'universal' one. According to the existential reading, the theorist claims the existence of some rational choice-theoretic model D of choice situation C. He claims that agent S was the owner of some utility function u and some probability function P, and that S solved the maximization problem corresponding to u and P by performing action a.[3]

Or formally,

∃D ∃u ∃P (RCT(D, C) & Ut(S, C, u) & VNM(u) & Prob(S, C, P) & Kolm(P) & Perf(S, C, a) & Max(D, u, P, a)),

with notation as in Table 1. According to the universal reading, no existence claims about models are being made. Only the hypothetical claim is being made that if some rational choice-theoretic model D (with utility function u and probability function P) is a model of C, then action a solves the corresponding maximization problem. Or formally,

∀D ∀u ∀P ((RCT(D, C) & Ut(S, C, u) & VNM(u) & Prob(S, C, P) & Kolm(P) & Perf(S, C, a)) → Max(D, u, P, a)).

I believe that the universal reading is hardly acceptable as a representation of what a rational choice theorist does in explaining human action. It entails that agents maximize their expected utility even in cases in which they are motivated by completely different kinds of reasons. In cases where they fail to have von Neumann and Morgenstern utilities and Kolmogorov probabilities, the antecedent is false and the universal claim holds vacuously. In other words, the universal reading makes the explanatory task of the theorist too easy. Although I will disregard the universal reading in the sequel, the arguments presented in this paper would work mutatis mutandis for the universal reading as well.
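The maximization claim Max(D, u, P, a) at the heart of the existential reading can be illustrated computationally. The following is a minimal sketch, with entirely hypothetical states, actions, probabilities, and utilities; it simply identifies the action that maximizes expected utility relative to u and P:

```python
# Sketch of Max(D, u, P, a): given the agent's probability measure P
# over states (Prob, Kolm) and utility function u over action-state
# pairs (Ut, VNM), action a solves the maximization problem iff it
# maximizes expected utility. All values below are hypothetical.

states = ["rain", "sun"]
actions = ["umbrella", "no_umbrella"]

P = {"rain": 0.3, "sun": 0.7}           # the agent's expectations
u = {                                    # the agent's utilities
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.4,
    ("no_umbrella", "rain"): 0.0, ("no_umbrella", "sun"): 1.0,
}

def expected_utility(action):
    return sum(P[s] * u[(action, s)] for s in states)

a = max(actions, key=expected_utility)   # the action for which Max holds
print(a)  # no_umbrella (expected utility 0.7 vs 0.58 for umbrella)
```

Note that the sketch presupposes exactly what the existential reading claims: that some such P and u exist and are the agent's own.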

In a similar way we easily obtain a logical analysis of game-theoretic explanations. Suppose a game-theorist describes an action of some agent in some choice situation thus:

Agent S performs action a in choice situation C according to game theory with the solution concept Nash.

Again an existential and a universal reading can be distinguished, and again the universal reading is too weak to be interesting. According to the existential reading, the game-theorist claims the existence of some game Γ that models C, and of some utility function u of which agent S is the owner. Mentioning the utility function separately is a bit superfluous here, as strictly speaking it is already contained in the game. But I will stick to this redundancy to make the comparison between rational choice theory and game theory more transparent. Further, apart from the triviality that S really carried out action a, the theorist claims that it was part of a Nash equilibrium of Γ. Or formally,

∃Γ ∃u (GT(Γ, C) & Ut(S, C, u) & VNM(u) & Perf(S, C, a) & Nash(Γ, u, a)).

Game-Theoretic Agents

[3] A distinction can be made between available actions and actions the agent knows to be available. But in order for a rational choice-theoretic or game-theoretic model to function properly, these two sets of actions have to coincide. I argued for this claim in de Bruin 2004. Cf., e.g., Hintikka 1996, 214-215.

Assuming the belief-desire framework of action explanation, a clear difference between rational choice and game-theoretic explanations of action emerges.[4]

Rational choice-theoretic explanations provide beliefs and desires. The beliefs are the probability measures P; the desires, the utility functions u. Not so for game-theoretic explanations. Quite clearly we learn something about the desires of the players because the existence of some von Neumann and Morgenstern utility function u is claimed by the game-theorist. But we are not informed about the beliefs of the players. The question is whether this is a problem. Let me survey some possible answers.

(i) The theorist might admit that indeed it would be nice if beliefs could be specified. But, not having the means to accomplish that, we should be happy that in the form of the utility structure we at least have something. This is unacceptable because very often we do have information about beliefs.

(ii) The theorist might say that no extra reasons are needed because the situation he describes is one in which the players blunder into a Nash equilibrium. This is unacceptable unless all game-theoretic aspirations are given up. What would be the function of mentioning the solution concept if it only accidentally fits the outcome?

(iii) One may say that, apart from his utility function, the fact that his action is a Nash action forms a reason for the agent to perform it. But either that is silly, or it is elliptic for the expression of some propositional attitude. It is silly (and unacceptable) if taken literally, because the sole fact that something is a Nash action cannot play a motivational role for the agent. This is so because motivations require propositional attitudes (desires to change the world in certain respects, beliefs about what the world looks like in certain respects). It is elliptic (and acceptable if ellipsis be removed adequately) if one takes it to express that the agent believed that playing Nash is the best way to satisfy his desires, or something like that. But this would have to be made more precise, and I will make it more precise below using epistemic characterization theorems.

(iv) The theorist might claim the existence of some dynamics and refer to some theorem from evolutionary game theory relating this dynamics to the Nash equilibrium. Although more sophisticated, this is again either silly, or elliptic. It is silly (and unacceptable) if taken literally, because the sole fact that some dynamics obtains cannot play a motivational role for the agent as motivations require propositional attitudes. It is elliptic (and still unacceptable) if one takes it to express that the agent believed that this dynamics obtained.

(v) The theorist would claim that evolution programs agents and that therefore no reference to reasons is needed. Agents as automata do not have reasons. They only have 'subpropositional' dispositions. This is consistent. But it is unacceptable if we wish to explain actions in terms of the reasons of the agents. One would have to doubt whether it is still actions that one explains. The difference between actions, reflexes, and so forth blurs.

(vi) The theorist could simply pick some probability distribution over the actions of S's opponents and make sure that action a maximizes expected utility (where the utility function is the one of which existence is being claimed). This is unacceptable because it is entirely ad hoc. All game-theoretic aspirations would be given up because the mentioning of an ad hoc probability distribution would not show why the Nash equilibrium (rather than another solution concept) figured in the explanation.

(vii) The theorist could pick some probability distribution over the actions of S's opponents and make sure that action a maximizes expected utility. If in addition the theorist could show that this probability distribution is not ad hoc, he would have given additional reasons for S's performing a in C. This is acceptable, but it has to be made more precise, and I will make it more precise below using epistemic characterization theorems.

(viii) The idea now is that a game-theorist who explains some action as an action that is part of a Nash equilibrium makes implicit reference to some beliefs of the players that are not ad hoc. How to avoid being ad hoc? By requiring some structural relation between the solution concept and the implicit beliefs.

A very obvious candidate for such a structural relation is presented by epistemic characterization theorems. They provide sufficient conditions for a solution concept to obtain. Epistemic characterization theorems are implications. The antecedent specifies conditions on the beliefs and desires of the players; the consequent states some conditions about the actions and the solution concept. For instance, the epistemic characterization of the Nash equilibrium is:

If (i) all players are rational, (ii) all players know their own utility function, and (iii) all players know what their opponents are going to play, then they play a Nash equilibrium.
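This theorem can be illustrated with a minimal sketch. In the hypothetical 2x2 game below (all payoff values invented for the example), each player best-responds to correct knowledge of what the other will play, and the resulting profile is checked to be a Nash equilibrium:

```python
# Sketch of the epistemic characterization of Nash equilibrium:
# rational players who know their own utilities and know what the
# opponent will play end up in a Nash equilibrium. Payoffs hypothetical.

# payoffs[player][(row_action, col_action)]
payoffs = {
    0: {("a", "x"): 2, ("a", "y"): 0, ("b", "x"): 0, ("b", "y"): 1},
    1: {("a", "x"): 1, ("a", "y"): 0, ("b", "x"): 0, ("b", "y"): 2},
}
row_actions, col_actions = ["a", "b"], ["x", "y"]

def is_nash(r, c):
    """(r, c) is a Nash equilibrium iff neither player gains by deviating."""
    return (payoffs[0][(r, c)] == max(payoffs[0][(rr, c)] for rr in row_actions)
            and payoffs[1][(r, c)] == max(payoffs[1][(r, cc)] for cc in col_actions))

# (iii): player 0 knows player 1 will play "x"; player 1 knows player 0
# will play "a". (i)+(ii): both best-respond to that knowledge.
belief_of_0, belief_of_1 = "x", "a"
play_0 = max(row_actions, key=lambda r: payoffs[0][(r, belief_of_0)])
play_1 = max(col_actions, key=lambda c: payoffs[1][(belief_of_1, c)])

# Knowledge (as opposed to mere belief) requires the beliefs to be correct:
assert play_0 == belief_of_1 and play_1 == belief_of_0
print(is_nash(play_0, play_1))  # True
```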

[4] Other ways of action explanation would be, for instance, an 'existential phenomenology' à la Merleau-Ponty (see Merleau-Ponty 1945) or a method based on neurophysiology (cf. Bennett and Hacker 2003).

Another well-known example characterizes common knowledge of rationality and utility structure as sufficient for iterated strict dominance (in two-person normal form games) and backward induction (in extensive form games of perfect information).[5]
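The solution concept of iterated strict dominance itself is a simple elimination procedure. A minimal sketch for two-person normal form games follows (payoffs hypothetical): a strategy is removed when some other surviving strategy does strictly better against every surviving opponent strategy.

```python
# Iterated elimination of strictly dominated strategies in a two-person
# normal form game. payoffs[(r, c)] = (row player's payoff, column
# player's payoff). All payoff values are hypothetical.

def pay(payoffs, i, s, o):
    # player 0 chooses rows, player 1 chooses columns
    return payoffs[(s, o)][0] if i == 0 else payoffs[(o, s)][1]

def iesds(payoffs, rows, cols):
    """Return the strategies of each player surviving iterated elimination."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for i, own, opp in ((0, rows, cols), (1, cols, rows)):
            for s in own[:]:
                # s is strictly dominated if some t beats it everywhere
                if any(all(pay(payoffs, i, t, o) > pay(payoffs, i, s, o)
                           for o in opp)
                       for t in own if t != s):
                    own.remove(s)
                    changed = True
    return rows, cols

# Prisoner's dilemma-style payoffs: defection strictly dominates.
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
print(iesds(payoffs, ["C", "D"], ["C", "D"]))  # (['D'], ['D'])
```

The epistemic characterization says that players who have common knowledge of rationality and utility structure will play strategies surviving this procedure.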

The usual way to think about such theorems is that they can be used by the game-theorist to justify the use of some solution concept in a specific explanation. The theorist may defend, for example, his use of the concept of iterated strict dominance to explain the behavior of some agent by stating that among other things common knowledge of rationality and utility structure obtains in the choice situation. What I suggest is that game-theoretic explanations should be read in general as making such claims. Whenever a game-theorist uses some solution concept, he should be taken to make the claim that the epistemic conditions of the corresponding characterization theorem obtain. My argument is that there is no alternative way to distill from a game-theoretic explanation the right kind of reasons for the agents in a uniform way. All alternatives I discussed have some problems. Either something is wrong with the motivational force of the alleged reasons (the issue about the propositional attitude), or they are ad hoc and fail to account for the necessity of using a solution concept in the first place.

[5] The epistemic characterization of the Nash equilibrium is due to Aumann and Brandenburger 1995. Iterated strict dominance was dealt with, in various degrees of formality, by Bernheim 1984, Pearce 1984, and Spohn 1982. The locus classicus for backward induction is Aumann 1995. I am a bit sloppy in using common knowledge instead of common belief. See de Bruin 2004 for details.

It is important to point out that it is feasible, and consistent, to require that epistemic characterization results be used as the canonical route to the beliefs of the players. The most elegant way uses characterization theorems that are specified in terms of Stalnaker's 'game models.'[6]

To use the example of iterated strict dominance, it can not only be shown that common knowledge of rationality implies iterated strict dominance, but also that every outcome of iterated strict dominance of any game can be realized in a situation of common knowledge of rationality. In other words, given an arbitrary outcome of iterated strict dominance, the 'game models' approach enables us to sketch an epistemic setting in which (i) rationality and utility structure are common knowledge, and (ii) this very outcome is played. It is this converse direction that shows that the game-theorist is not committed to something infeasible. If he explains an action as, for instance, iteratively undominated, there is indeed a game-playing situation in which there is common knowledge of rationality and in which that very action is played. To take the epistemic characterization results as the suppliers of beliefs is then a coherent assumption.

To sum up, the first claim is that the epistemic characterization theorems are the canonical way to the beliefs of the players in game-theoretic action explanation.


Game-theorists explain actions in terms of reasons. As reasons for some action of some agent they give von Neumann and Morgenstern utility functions and Kolmogorov probability measures. The former are given explicitly in the game-theoretic representation of the agent's choice situation. The latter are implicit, but can be obtained via epistemic characterization results. The logical analysis of game-theoretic explanations contains the element Ut(S, C, u) requiring that the utility function of which existence is claimed is in fact S's utility function in choice situation C; he has to be the owner of u, so to speak. Of course, the same has to be true of the probability distribution implicitly referred to, and, of course, the action that has to be explained has to be a solution to the corresponding maximization problem. The agent has to maximize expected utility. Game-theoretic explanations and rational choice-theoretic explanations do not, then, seem to differ in principle. Both involve utility and probability, both involve maximization, and both involve the claims that the utility is the agent's utility, that the probability is the agent's probability, and that the agent solves a maximization problem. But these similarities are very deceptive.

Rational choice theorists and game-theorists, although they do need to bother addressing the ownership issue of the utility functions (the claim Ut(S, C, u), that is), do not need to explain such things as why S has the preferences he has (for instance by referring to S's education, or bourgeois background) or whether they are reasonable or not. Now compare in the same way rational choice theorists and game-theorists with respect to the agent's beliefs P. Rational choice theorists as well as game-theorists need to bother addressing the ownership of the probabilistic expectations (the claim Prob(S, C, P)). Rational choice theorists do not need to explain such things as why S has the beliefs he has (for instance by referring to S's practices of belief formation, his critical or narrow mind) or whether these beliefs are reasonable. But game-theorists do need to bother thinking about these questions. In fact, referring to epistemic characterization results to explain the very probability measure P is to answer these questions. The epistemic characterization results say that S has formed his beliefs on the basis of inspection of the game structure and on the basis of rationality considerations. And on the basis of nothing else. Incidentally, this idea can be traced back to von Neumann and Morgenstern's Theory of Games and Economic Behavior:

[6] First introduced in Stalnaker 1996.

Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those 'alien' variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles.[7]

[7] von Neumann and Morgenstern 1944, 11.

To sum up, the second claim is that whenever the game theorist explains the behavior of some agent in truly game-theoretic terms, he is implicitly committed to the view that the agent, to form the beliefs necessary for his strategic deliberation, disregards all available information except what involves the game structure and the rationality of the players. Epistemic characterization theorems make this explicit.[8]


Yet the practice of discarding so much available information is an implausible, or simply bad, way of belief formation. It denies players a large spectrum of possible evidence on which to base their beliefs. It is not adequate as a description of how actual human beings reason, and it is even more inadequate as a theory of knowledge or scientific methodology. I will structure the argument by distinguishing these two cases: real and ideal agents.

The game-theorist's call to allow only truly game-theoretic information instead of exogenous or statistical data clearly puts a restriction on the possible sources of evidence players are allowed to invoke as reasons for their beliefs. The appeal of this call can be explained by looking at the abstract character of game-theoretic modeling. Indeed, if we abstract away from everything except the number of players, their possible actions, and their preferences, then there is not much exogenous or statistical information to be found. The game-theorist will not deny that the strategic choice situations he tries to model are concrete, and that real agents can actually use a large spectrum of concrete data for belief formation. Of course, all strategic choice situations game theory is concerned with are concrete, and all game-theoretic models abstract. Abstraction happens everywhere in science, but nowhere in science is too high a level of abstraction a good thing. The above logical analysis of game-theoretic explanations and the considerations about the specification of reasons for actions allow us to make precise statements about what exactly it is that gets lost in abstraction. By abstracting away from everything except the possible actions, the preferences, and the number of players, a game-theoretic model leaves unmodeled much of the evidence or data or information that real players will actually use to form beliefs about their opponents. This would not be a problem if, for instance, the origin of the beliefs did not matter. But, as we have seen, the origin of the beliefs matters crucially: without a specification of the origin of the beliefs in terms of the epistemic characterization results, game-theoretic explanations of human actions do not provide the reasons for actions in a systematic manner.

Let us briefly take stock. I started from the assumption that game-theoretic explanations have to conform to the belief-desire framework. The desires are clear. What about the beliefs? There are many ways to sneak in beliefs, but I showed that only via epistemic characterization results can a systematic commitment to particular beliefs be obtained. That is, I have argued that if you start from the assumption that game-theoretic explanations have to conform to the belief-desire framework, then there is no way, except by using epistemic characterization results, to get the beliefs the theorist is committed to ascribing to the players. The point now is that this entails a very specific origin of the beliefs. For instance, for iterated strict dominance: common knowledge of rationality and nothing else.

As long as it stays within the realm of the game-theoretical, the specification of the origin of the beliefs can only be phrased in terms of those aspects of the strategic choice situation that survive the abstraction process. Clearly this results in a distorted model of belief formation. Whereas there is no hope, then, of dealing with belief formation in game theory in a way that does justice to the concreteness of the evidence, it could still be the case that what game theory assumes about belief formation is plausible from the perspective of some theory of knowledge (for 'ideal' epistemic agents, so to speak). In fact, a rather strange sort of theory would be the result: an interpretation of game theory as descriptive (ex post or ex ante) of the actions of agents, but as prescriptive about their beliefs. But this sounds too far-fetched.

[8] The epistemic characterization of the Nash equilibrium allows for exogenous information. The antecedent requires knowledge of the actions of the opponents, and it is not excluded that this knowledge is formed on the basis of, for instance, statistical information. But this observation does not save the Nash equilibrium, because it is now only right to use the solution concept in cases in which the players have knowledge, as opposed to mere and possibly mistaken belief, about their opponents. And the role of knowledge, as opposed to belief, in explanations of actions is highly disputed. See, for more details, de Bruin 2004.

A feature that distinguishes knowledge from belief is that knowledge is necessarily true, and belief is not. Another is that knowledge meets very high evidential standards, and belief does not. This is the point of the hierarchy of 'Gettier examples,' though it does not depend on such examples.[9]

This does not mean, however, that anyone can believe anything without further qualifications. Senseless beliefs are no beliefs. If you say that you believe something, then you have to be able to give an answer to the question why you do so. In general people will try to answer such a question by presenting the interlocutor with what they think is good evidence for the belief. All in all, beliefs need reasons.

Applied to game theory, how should players (players who are ideal from the viewpoint of some theory of knowledge) form beliefs? They should try to inspect their strategic choice situation in the most penetrating way possible; in particular, they should try to get as much information about their opponents as possible. They should be interested to hear something about the tradition in which their opponents were raised or the training they have had. They should try to determine the reliability of hearsay evidence and reported observations, and to sort out how to weigh such evidence in relation to their own observations. If available, they should attempt to interpret statistical surveys and consider other available exogenous data, and determine their relevance for their purposes. And, of course, they should try to find out as much as possible about the way their opponents try to form their beliefs. One thing, however, they should not do: disregard possible sources of information, eschew statistical or exogenous data, avoid going beyond what is immediate in the situation, or be narrow-minded and uncritical.

To sum up, the third claim is that by denying players access to any information except what is immediate from the game structure, game theory puts forward an epistemological claim that is inadequate as a description of real human beings, and implausible as a theory for epistemologically ideal agents.


Aumann, R. (1995), 'Backward Induction and Common Knowledge of Rationality,' Games and Economic Behavior, 8, 6-19.

Aumann, R. and Brandenburger, A. (1995), 'Epistemic Conditions for Nash Equilibrium,' Econometrica, 63, 1161-1180.

Battigalli, P. and Bonanno, G. (1999), 'Recent Results on Belief, Knowledge and the Epistemic Foundations of Game Theory,' Research in Economics, 53, 149-225.

Bennett, M. and Hacker, P. (2003), Philosophical Foundations of Neuroscience, Malden, Blackwell Publishing.

Bernheim, B. (1984), 'Rationalizable Strategic Behavior,' Econometrica, 52, 1007-1028.

de Bruin, B. (2004), 'Explaining Games: On the Logic of Game Theoretic Explanations,' Diss. (University of Amsterdam).

Gettier, E. (1963), 'Is Justified True Belief Knowledge?' Analysis, 23, 121-123.

Hintikka, J. (1996), The Principles of Mathematics Revisited, Cambridge, Cambridge University Press.

Merleau-Ponty, M. (1945), Phénoménologie de la perception, Paris, Librairie Gallimard.

von Neumann, J. and Morgenstern, O. (1944), Theory of Games and Economic Behavior, Princeton, Princeton University Press.

Pearce, D. (1984), 'Rationalizable Strategic Behavior and the Problem of Perfection,' Econometrica, 52, 1029-1050.

Spohn, W. (1982), 'How to Make Sense of Game Theory,' in: Balzer, W., Spohn, W. and Stegmüller, W., Studies in Contemporary Economics, Vol. 2: Philosophy of Economics, Berlin, Springer-Verlag, 239-270.

Stalnaker, R. (1996), 'Knowledge, Belief and Counterfactual Reasoning in Games,' Economics and Philosophy, 12, 133-163.

Stalnaker, R. (1998), 'Belief Revision in Games: Forward and Backward Induction,' Mathematical Social Sciences, 36, 31-56.

[9] Gettier 1963; and many articles along similar lines.