Boudewijn de Bruin - On Glazer and Rubinstein on Persuasion





Published as "On Glazer and Rubinstein on Persuasion." New Perspectives on Games and Interactions. Ed. Krzysztof R. Apt and Robert van Rooij. Amsterdam: Amsterdam UP, 2008. 141-150.

These were her internal persuasions: "Old fashioned notions; country hospitality; we do not profess to give dinners; few people in Bath do; Lady Alicia never does; did not even ask her own sister's family, though they were here a month: and I dare say it would be very inconvenient to Mrs Musgrove; put her quite out of her way. I am sure she would rather not come; she cannot feel easy with us. I will ask them all for an evening; that will be much better; that will be a novelty and a treat. They have not seen two such drawing rooms before. They will be delighted to come to-morrow evening. It shall be a regular party, small, but most elegant."

— Jane Austen, Persuasion

Boudewijn de Bruin

Boudewijn de Bruin is assistant professor in the Faculty of Philosophy of the University of Groningen. De Bruin did his undergraduate work in musical composition at Enschede, and in mathematics and philosophy at Amsterdam, Berkeley, and Harvard. He obtained his Ph.D. in philosophy from the Institute for Logic, Language and Computation in Amsterdam. His doctoral dissertation, written under the supervision of Johan van Benthem and Martin Stokhof, was on the logic of game theoretic explanations.

Jacob Glazer and Ariel Rubinstein proffer an exciting new approach to the analysis of persuasion. Perhaps without being aware of it, and at any rate without acknowledging it in the bibliography, their paper addresses questions that argumentation theorists, logicians, and cognitive and social psychologists have been interested in since Aristotle's Rhetoric. Traditionally, argumentation was thought of as an activity involving knowledge, beliefs, and opinions, and it was contrasted with bargaining, negotiation, and other strategic activities involving coercion, threats, deception, and what have you. More recently, however, several theorists have argued that strict boundaries are conceptually indefensible and methodologically undesirable, separating as they do researchers who would more fruitfully combine efforts. Katia Sycara, for instance, writes that "persuasive argumentation lies at the heart of negotiation" (Sycara 1990), identifying various argumentation and negotiation techniques on the basis of careful empirical research on labor organizations. Simon Parsons, Carles Sierra, and Nick Jennings, by contrast, develop models of argumentation-based negotiation (Parsons, Sierra and Jennings 1998) with a high level of logical formality. Chris Provis, to mention a third, perhaps more skeptical representative, gives a systematic account of the distinction between argumentation and negotiation, suggesting that persuasion be located right in the middle (Provis 2004).[1] Glazer and Rubinstein's work enriches this literature with an analysis of persuasion. Using machinery from a formal theory of negotiation par excellence, economic theory, they develop a model of persuasion problems in which a speaker desires a listener to perform a certain action. Informing the listener about the state of the world is the only thing the speaker can do, but he can do it in many ways. By strategically making a statement that maximizes the likelihood that the listener decides to perform the action, the speaker exploits a peculiar feature of the model, namely, that the speaker does not need to tell the listener the whole truth; the truth alone suffices. Persuasion, for Glazer and Rubinstein, is telling the truth strategically, and, phrased in the framework of a novel methodology, this new approach merits close attention.

A speaker, a listener, and a tuple (X, A, p, σ), with X a set of worlds (not necessarily finite), A ⊆ X, p a probability measure over X, and σ a function mapping each world x ∈ X to the set σ(x) of statements from S available at x: that is all there is to a "persuasion problem." There is a certain action that the speaker wants the listener to perform. The listener wants to perform it just in case the actual world is an element of A. The speaker, by contrast, wants the listener to perform the action even if the actual world is a member of the complement R of A. Since the listener does not have full information about the world but only "initial beliefs . . . given by a probability measure p over X" (page 4), he is partly dependent on the speaker, who has full knowledge of the world. Yet the speaker is under no obligation to report his full knowledge to the listener. The rules fixed by σ allow the speaker to make any of the statements contained in σ(x), if x is the actual world. Glazer and Rubinstein write that "the meaning of 'making statement s' is to present proof that the event σ⁻¹(s) = { x | s ∈ σ(x) } has occurred" (page 5). Strategically picking such an s characterizes persuasion: an s with a large σ⁻¹(s) will generally do better than a small one. The "persuasion function" f : S → [0, 1], moreover, is intended to capture "the speaker's beliefs about how the listener will interpret each of his possible statements" (pages 5-6). The statement f(s) = q means that "following a statement s, there is a probability of q that the listener will be 'persuaded' and choose . . . the speaker's favored action" (page 6). The speaker solves the maximization problem

max_{s ∈ σ(x)} f(s),

where x is the actual world.[2] From the perspective of the listener, if the speaker makes a statement t such that f(t) = max_{s ∈ σ(x)} f(s), there is a probability μ_x(f) that by using f he makes an error at x: performing the action while x ∉ A, or not performing it while x ∈ A. These probabilities are given by

μ_x(f) =

    1 - max_{s ∈ σ(x)} f(s)    if x ∈ A,

    max_{s ∈ σ(x)} f(s)        if x ∉ A.

[1] I owe much to Chris Provis' exposition in (Provis 2004).

[2] If σ(x) is infinite, the maximum need not exist; the speaker can then only approach the supremum of the expression.

The listener chooses a persuasion rule that solves the minimization problem

min_{f : S → [0, 1]} Σ_{x ∈ X} p(x) μ_x(f),

given his beliefs p.[3]
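The two optimization problems can be made concrete in a small computational sketch. The following Python snippet runs on a toy persuasion problem invented purely for illustration (it is not an example from the paper): three worlds, two statements, a uniform prior, and a brute-force search over deterministic persuasion rules f : S → {0, 1}.

```python
from itertools import product
from fractions import Fraction

# A toy persuasion problem, invented for illustration (not from the paper):
X = [1, 2, 3]                                  # possible worlds
A = {1, 2}                                     # worlds where the listener should act
S = ["a", "b"]                                 # statements
sigma = {1: {"a"}, 2: {"b"}, 3: {"a", "b"}}    # statements available at each world
p = {x: Fraction(1, 3) for x in X}             # listener's uniform prior

def expected_error(f):
    """Sum over worlds of p(x) * mu_x(f), with the speaker choosing
    an s in sigma(x) that maximizes f(s)."""
    total = Fraction(0)
    for x in X:
        best = max(f[s] for s in sigma[x])     # the speaker's maximization
        total += p[x] * ((1 - best) if x in A else best)
    return total

# The listener's minimization, brute-forced over deterministic rules f : S -> {0, 1}.
rules = [dict(zip(S, vals)) for vals in product([0, 1], repeat=len(S))]
best_rule = min(rules, key=expected_error)
print(best_rule, expected_error(best_rule))    # {'a': 1, 'b': 1} 1/3
```

Since world 3, an R-world, can make either statement, no rule separates it from worlds 1 and 2; the best the listener can do is accept everything, at expected error 1/3. This also illustrates why statements with large inverse images serve the speaker.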

If this is a model, what does it model? Glazer and Rubinstein give several examples of persuasion problems. They bear names suggesting rather concrete applications ("The Majority of the Facts Support My Position," "I Have Outperformed the Population Average") as well as technical, perhaps contrived ones ("Persuading Someone that the Median is Above the Expected Value"). In the first example, the speaker tosses a coin five times in a row, and wants the listener to perform a certain action that the listener wants to perform just in case the coin landed heads at least three times. If the persuasion problem is such that the speaker can show how the coin landed in three of the five cases, he will of course succeed in persuading the listener, provided the coin landed heads at least three times. More interestingly, the speaker may only be able to reveal the outcomes of two coin tosses. Given that the listener only wants to perform the action in case the coin landed heads at least three times, there is always a risk involved in acting on the basis of the information the speaker provides. The listener may consider the persuasion rule according to which he performs the action just in case the speaker demonstrates that the coin landed heads twice. Among the total of 32 possible outcomes of the experiment (HHHHH, THHHH, and so on), there are 10 in which the coin landed heads exactly twice, not thrice, and this makes the error probability of this rule 10/32. The listener can improve if he adopts the persuasion rule to accept only if the speaker demonstrates that the coin landed heads twice in a row. This persuasion rule has error probability 5/32:

"An error in favor of the speaker will occur in the 4 states in which exactly two neighboring random variables [two successive coin tosses] support the speaker's position and in the state [HTHTH] in which the speaker will not be able to persuade the listener to support him even though he should." (page 7)

The example reveals a number of epistemic presuppositions behind the model Glazer and Rubinstein propose. Speaker and listener, for instance, have to know exactly what the rules of the game are. If the speaker does not know that he can reveal the outcomes of at most two successive coin tosses, he will not consider the sequence HTHTH a problematic situation for him, and if the listener believes that the speaker may be so misinformed about the structure of the game, he will also evaluate differently what the optimal persuasion rule is. One might even conjecture that as long as there is no common knowledge of the structure of the persuasion problem, game play is impossible. In addition, for the listener to calculate the error probabilities of the various persuasion rules, he has to agree with the speaker on the probability distribution of the relevant random variables. In the description of the formal model, Glazer and Rubinstein to that end insert a probability measure p over possible worlds, with the initial beliefs of the listener as intended interpretation. The example, however, suggests that this probability is rather derived from the objective characteristics of the set of possible worlds X, available to listener and speaker alike. Not only does the speaker, then, know what the actual world is in a situation in which the listener has only probabilistic beliefs concerning that issue; he also knows exactly what the listener believes about the world.

Nor is this all. While strictly speaking not a condition of possibility for an application of the model, Glazer and Rubinstein suggest that the speaker not only knows the listener's beliefs, but also the listener's prospective choice of strategy. Given that the speaker has access to the listener's beliefs p, it is routine for him to calculate an optimal persuasion rule, and assuming that the listener is in some sense rational, the speaker is quite justified in believing that the listener will choose that rule. There is an interesting proviso, though, for the meaningfulness of the definition of error probability depends not only on the fact that p expresses the listener's probabilistic beliefs concerning possible worlds, but also on the fact that the listener assumes that the speaker wants to maximize the likelihood that the listener performs the action. If the speaker did not want so to maximize, the listener would be unwise to base his risk estimation on the value of the solution to max_{s ∈ σ(x)} f(s). The speaker, for his derivation of the persuasion rule, needs to believe that the listener believes the speaker to be rational.

For the speaker, it is quite clear how to motivate the rule of rationality embodied in his maximizing the probability of acceptance. If the speaker has non-probabilistic beliefs concerning the persuasion rule f adopted by the listener, the only thing he needs to do is pick a statement s, and it makes much sense to choose one that maximizes expected acceptance. Conceptions of rationality such as maximin or minimax regret are out of place here. For the listener this may be a bit different. The listener wants to pick a persuasion rule f : S → [0, 1], and the most direct constraint is that f assign high probability in cases in which the actual world is an A-world, and low probability in cases in which it is an R-world. Without any indication of how the elements of S relate to the actual world's being in A or in R, there is only one thing the listener can use to determine his strategy: his beliefs. If he believes with probability one that the world is in R, a reasonable persuasion rule assigns the value zero (non-performance) to any statement made by the speaker. But the speaker knows what the actual world is, and the listener knows that the speaker knows it, so if the speaker makes a statement s with σ⁻¹(s) ⊆ A, to the effect that the world is definitely an A-world, then what should the listener do? This is a clear case of belief revision that the model may not fully capture by assuming that the probabilities are objectively induced by the random variables determining X. For a rational choice of a persuasion rule, the listener may not have enough information about the relation between the state of the world and the statement the speaker makes.

A conditional statement can be made, though. If the listener believes that the speaker knows what persuasion rule f the listener chooses, and the listener believes that the speaker is rational, then the listener believes that in his calculation of the optimal persuasion rule he can use the μ_x(f) to assess the risk of making errors, and solve min_{f : S → [0, 1]} Σ_{x ∈ X} p(x) μ_x(f). Such a conditional statement, however, may delineate the applicability of the model in ways analogous to what we learn from the epistemic characterization of the Nash equilibrium (Aumann and Brandenburger 1995). To constitute a Nash equilibrium, knowledge of strategies is presupposed: if I know what you are playing, and I am rational, and if you know what I am playing, and you are rational, then we will end up in a Nash equilibrium. Such assumptions are not always problematic, for sure, but to justify making them requires in any case additional argumentation about, for instance, evolutionary (learning) mechanisms or repeated game play. As the turn to iterative solution concepts constitutes to some extent an answer to the epistemic problems with the Nash equilibrium, it may be interesting to investigate whether the model Glazer and Rubinstein put forward can similarly be turned in the direction of common knowledge of game structure and rationality, especially if this can be accomplished in an extensive game framework. For consider the following dialog (election time in Italy):

Politician: Vote for me.

Citizen: Why?

Politician: If you vote for me, I'll create one million new jobs.

Citizen: That's unpersuasive.

Politician: If you vote for me, I'll fight for your freedom.

Citizen: Persuaded.

This dialog, due to Isabella Poggi, illustrates a theory of persuasion in terms of framing developed by Frederic Schick, among others (Poggi 2005, Schick 1988). While argumentation is about beliefs, and negotiation and bargaining are about desires and interests, persuasion, for Schick, involves the framing of options. A persuader persuades a persuadee by describing in novel and attractive terms an action the persuadee found unattractive under a previous description:

"We [may] want something under one description and . . . not want it under another. We may even want a proposition true and want a coreportive proposition false . . ."

Persuasion is the attempt to change a person's understanding of something, to get him to see it in some way that prompts him to act as he would not have done." (Schick 1988, 368)

At first sight it may be too much to ask Rubinstein and Glazer to incorporate this insight, if an insight it is, in their formalism. Yet once we closely consider the way they set up the rules of the game, and in particular the role they assign to the function σ, there are in fact two recommendable ways to do so.

[3] Lemma 1 says that whatever the cardinality of S, there is always a solution to this minimization problem if the persuasion problem is finite.

The set σ(x) contains exactly the statements that the speaker can make if the actual world happens to be x, and making a statement s amounts to demonstrating that the event σ⁻¹(s) = { x | s ∈ σ(x) } has occurred. As a result, there is no room for the speaker to provide false information, but there is quite some room to provide true information tactically and strategically. A bit informally put, if σ⁻¹(s) contains many A-states and few R-states, then the speaker has good reasons to make the statement s, rather than another statement with a less fortunate division between A and R.[4] From an extensional point of view, it suffices if σ maps worlds to sets of worlds. Propositions, under this extensional perspective, are nothing more than sets of worlds. Extensionally speaking, modeling framing seems pretty hopeless, though: a glass half full is the same as a glass half empty. From an intensional point of view, however, distinctions can be made between coextensive statements, and it is here that there is room for Glazer and Rubinstein to incorporate framing in their framework. The recipe is this. On the basis of a formal language, a set of statements S is defined from which the speaker may choose, and to make this set truly interesting, it has to be larger than ℘(X), the set of all statements possible with respect to X in purely extensional terms. To the description of the persuasion problem a relation of extensionality over the statements is added such that s ~ t iff σ⁻¹(s) = σ⁻¹(t). Define a preference relation inside the resulting equivalence classes to express the listener's preferences for differing descriptions, and make the persuasion rule dependent on the statements in a systematic way, described in terms of the extensionality relation and these preferences.
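The extensionality relation in this recipe is straightforward to compute. A hypothetical sketch (the worlds and statements are invented for illustration): statements are grouped into equivalence classes under s ~ t iff σ⁻¹(s) = σ⁻¹(t), so that "half full" and "half empty" come out coextensive, while remaining distinct statements that a preference relation inside their class could then rank.

```python
from collections import defaultdict

# Invented illustration: two worlds, three statements.
sigma = {
    "w1": {"glass half full", "glass half empty"},
    "w2": {"glass half full", "glass half empty", "glass on table"},
}

# Inverse images: sigma_inv(s) = { x | s in sigma(x) }.
sigma_inv = defaultdict(set)
for world, statements in sigma.items():
    for s in statements:
        sigma_inv[s].add(world)

# Equivalence classes under s ~ t iff sigma_inv(s) == sigma_inv(t).
classes = defaultdict(list)
for s, worlds in sigma_inv.items():
    classes[frozenset(worlds)].append(s)

for extension, stmts in classes.items():
    print(sorted(stmts), "->", sorted(extension))
# ['glass half empty', 'glass half full'] -> ['w1', 'w2']
# ['glass on table'] -> ['w2']
```

A preference relation over the first class would then let a persuasion rule react differently to "half full" than to "half empty," even though both prove the same event.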

A rather different approach to framing is possible, too, one that is much closer to the actual model Glazer and Rubinstein put forward. The speaker, in the persuasion problem as the authors define it, has a lot of freedom to choose a statement s to give the listener information about the actual world. The only restriction is that the actual world be part of the set of worlds σ⁻¹(s). Instead of locating framing in intensionally different but extensionally equivalent such sets, framing can also be modeled fully extensionally. Different statements s and t, each with the actual world in its inverse image under σ, frame the actual world differently, and one could very well maintain that when the speaker selects what statement to make in Glazer and Rubinstein's model, he is already engaged in framing decisions. While in the intensional solution to framing the speaker would choose between making a statement in terms of the morning star and one in terms of the evening star, and opt for the latter because he knows that the listener is a night owl, in the alternative solution the speaker would describe the weather in terms of one of two extensionally different statements such as "the weather is good for sailing" and "the weather is good for kite surfing," depending on whether the listener likes sailing or kite surfing.

The simple substitution of freedom for jobs in the dialog, and of drawing rooms for country hospitality in the quotation from Persuasion, is an example of persuasion by framing, however simple or simplistic such a switch may be. The dialog, and Austen's stream of consciousness avant la lettre, point to another important aspect of persuasion, too: its temporal and sequential character. Argumentation theorists and logicians alike have noticed that the persuasive force one can exercise on others often depends on the order in which one presents one's arguments, offers, opinions, and threats. In dialogical logic, for instance, a proponent defends a proposition against an opponent who may attack according to clearly described rules. In its early days, dialogical logic was used to promote intuitionist logic à la Heyting, or even to give it firm conceptual grounds. Contemporary dialogical logicians, however, see themselves as engaged in building a "Third Way" alternative to syntactic and semantic investigations of logical consequence; in one word, pragmatics.

To get some feel for the kind of models used here, this is a dialogical argument, due to Rückert (2001), to the effect that ((φ → ψ) & φ) → ψ is a tautology:

Proponent: ((φ → ψ) & φ) → ψ

Opponent: Well, what if (φ → ψ) & φ?

Proponent: I'll show you ψ in a minute. But wait, if you grant (φ → ψ) & φ, then I ask you to grant the left conjunct.

Opponent: No problem, you get your φ → ψ.

Proponent: And what about the right conjunct?

Opponent: That one you get, too: φ.

Proponent: Well, if you say φ, I may say φ myself to attack the implication φ → ψ you granted.

Opponent: Right, I see what you're aiming at: you want me to say ψ, and I'll admit that ψ.

Proponent: Perfect, that means I have shown you ψ in response to your initial query: ipse dixisti!

Glazer and Rubinstein's approach to persuasion is decidedly static as it stands, but I believe that it can be made dynamic at relatively low cost. A first step to consider is to take the probability distribution p as an expression of the truly subjective beliefs of the listener. This has the advantage that belief revision policies can be described to deal with cases in which the speaker comes up with new information contradicting the listener's beliefs. In general, the listener may stubbornly stick to his p, but in more interesting persuasion problems the listener will revise his beliefs because, as it may be assumed, he knows that, however tactically and strategically the speaker speaks, he will at least speak the truth. In a dynamic setting, furthermore, there may be more room for less heavy epistemic assumptions. To put it bluntly, my guess is that once persuasion games are represented as extensive games, common knowledge of game structure and rationality suffices to derive optimal persuasion rules. To my mind, this would constitute an increase in realism.

An additional advantage is that extensive models can also take care of Aristotelian analyses of persuasion. In the Rhetoric, Aristotle distinguished three ways in which speakers can persuade their listeners. The rational structure of what the speaker says, the logos, first of all contributes to the persuasive force. Then the character of the speaker, his ethos, determines how credible and trustworthy the listener will judge the speaker, while, finally, the emotional state of the listener, the pathos, plays a role in how a certain speech is received. Compare a well-organized defense by a lawyer of established reputation, addressed to a serious and objective judge, with a messy argument by a shabby lawyer directed at a judge involved in the case itself. And Aristotle is still highly popular, even among empirically oriented researchers. Isabella Poggi, for instance, agreeing with Schick about the role of framing in persuasion, sees expressions of rationality, credibility, and emotionality as the modern analogs of Aristotle's tripartite division, and gives them all a place in her theory of persuasion as the hooking of the speaker's goals to (higher) goals of the listener. In the Italian dialog, for instance, the speaker's goal that the listener vote for him was first hooked to diminishing unemployment. The goal of having a job, however, turned out not to be very important to the listener, and therefore another goal was used, the more general one of freedom, of which the speaker had reason to believe that it would arouse the listener's emotions. In the end, the speaker in fact succeeded in persuading (or so the story goes).

[4] This is rough since it ignores the probabilistic judgments p the listener will invoke to calculate his error probability.

Using the suggested extensional way of modeling framing, pathos can be captured by the preference relations the listener has over various descriptions of the actual world. The speaker's beliefs about such preferences can be included to describe specific persuasion strategies the speaker may wish to follow. The speaker is expected to try to describe the world in a way that makes it most attractive for the listener to perform the action, but in order to do that, the speaker needs some information concerning the listener's preferences.[5] Assumptions about general human preferences (concerning freedom, recognition, or what have you) make it possible for the speaker to do this without information about the specific listener. Ethos is captured by the belief revision policies of the listener. If the listener readily revises his beliefs upon hearing statements that contradict his own opinions, he reveals that he trusts the speaker: the speaker's character as a dependable person is at work. More skeptical belief revision policies, in all kinds of gradations, reveal the speaker's ethos to be functioning less than optimally. Extensive games can also model ways in which the speaker iteratively tries out reframing the description of the actual world. He may find out that the listener does not like sailing, in which case it does not help him to describe the world as one that is optimal for sailing; here, again, the listener's preferences play a crucial role. Logos, finally, gets modeled once speakers may take clever and less clever steps in iterative persuasion games, and it is especially here that cooperation with game theoretic approaches to logic (of which dialogical logic is only one among many) can be very fruitful (van Benthem 2007).


Robert Aumann and Adam Brandenburger. Epistemic Conditions for Nash Equilibrium. Econometrica, 63(5):1161-1180, 1995.

Johan van Benthem. Logic in Games. Manuscript, 2007.

Simon Parsons, Carles Sierra, and Nick Jennings. Agents that Reason and Negotiate by Arguing. Journal of Logic and Computation, 8(3):261, 1998.

Isabella Poggi. The Goals of Persuasion. Pragmatics and Cognition, 13(2):297-336, 2005.

Chris Provis. Negotiation, Persuasion and Argument. Argumentation, 18(1):95-112, 2004.

Christof Rapp. Aristotle's Rhetoric. Stanford Encyclopedia of Philosophy, 2002. Accessed July 12, 2007.

H. Rückert. Why Dialogical Logic? In Heinrich Wansing, editor, Essays on Non-Classical Logic, volume 1 of Advances in Logic, chapter 7, pages 165-185. World Scientific Publishing, New Jersey, 2001.

Frederic Schick. Coping with Conflict. The Journal of Philosophy, 85(7):362-375, 1988.

Katia Sycara. Persuasive Argumentation in Negotiation. Theory and Decision, 28(3):203-242, 1990.

[5] Aristotle saw persuasion as directed at judgements rather than at actions, though (Rapp 2002).