Game Theory

By
M. Shane Smith

August 2003

Simple mathematical models can provide insight into complex societal relationships, by showing that mutual cooperation can benefit even mutually distrustful participants.

Game theory is a mathematical approach to studying decision making that can help explain and address social problems. It most often focuses on the choice between competition and cooperation. Since games often reflect or share characteristics with real situations -- especially competitive or cooperative ones -- they can suggest strategies for dealing with such circumstances. Just as we may be able to understand the strategy of players in a particular game, we may also be able to predict how people, political factions, or states will behave in a given real situation.

Just as people generally try to win games, people also try to "win" or achieve their interests or goals in competitive situations. However, both in games and in the real world, we generally follow a set of rules to do this. Some games, like some real situations, are "winner-take-all." These games are by their nature very competitive, as only one person can win. (Chess is an example of such a game.) Other games, however, require cooperation to win. Many newer video games, for example, require cooperative strategies among multiple players in order for any single player to advance. In the real world, even during times of hostility, rivals generally have common interests and must cooperate to some degree.[1] Even during the Cold War, despite an intense East-West standoff, Moscow and Washington cooperated to achieve their common goal of averting a nuclear war.

What is Game Theory?

Game theory provides analytical tools for examining strategic interactions among two or more participants. By using simple, often numerical models to study complex social relations, game theory can illustrate the potential for, and risks associated with, cooperative behavior among distrustful participants. Though these abstract or hypothetical games are less familiar than typical board or video games, their lessons apply to a wide array of social situations.

Games used to simulate real-life situations typically include five elements:

  1. players, or decision makers;
  2. strategies available to each player;
  3. rules governing players' behavior;
  4. outcomes, each of which is a result of particular choices made by players at a given point in the game; and
  5. payoffs accrued by each player as a result of each possible outcome.[2]

These games assume that each player will pursue the strategies that help him or her to achieve the most profitable outcome in every situation.
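To make these elements concrete, the following minimal Python sketch encodes a hypothetical two-player game. The player names, strategy labels, and payoff numbers are illustrative assumptions, not drawn from any particular example in this essay.

    # Minimal illustration of the five elements of a game (hypothetical values).

    players = ["A", "B"]                      # 1. players (decision makers)
    strategies = ["cooperate", "compete"]     # 2. strategies available to each player

    # 3. Rules: both players choose simultaneously, without communicating.
    # 4.-5. Each outcome (a pair of choices) yields a payoff for each player,
    #       written here as (payoff to A, payoff to B).
    payoffs = {
        ("cooperate", "cooperate"): (2, 2),
        ("cooperate", "compete"):   (0, 3),
        ("compete",   "cooperate"): (3, 0),
        ("compete",   "compete"):   (1, 1),
    }

    def outcome(choice_a, choice_b):
        """Return the payoffs accrued by players A and B for one round of play."""
        return payoffs[(choice_a, choice_b)]

    print(outcome("compete", "cooperate"))  # -> (3, 0): A gains at B's expense

Under the assumption stated above, each player simply looks up the payoff that results from the pair of choices made; richer games add more players, more strategies, or repeated rounds, but keep the same five elements.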

Real life is full of situations in which people -- intentionally or unintentionally -- pursue their own interests at the expense of others, leading to conflict or competition. Games used to illustrate these relationships often place the interests of two players in direct opposition: the greater the payoff for one player, the less for the other. In order to achieve a mutually productive outcome, the players must coordinate their strategies, because if each player pursues his or her greatest potential payoffs, the shared outcome is unproductive. This concept is illustrated below, using the Prisoner's Dilemma Game.

This and other games illustrate the potential for cooperation to produce mutually beneficial outcomes. However, they also highlight the difficulties of obtaining cooperation among distrustful participants, because each player is tempted to pursue his or her individual interests. Cooperation requires that both players compromise and forgo their individual maximum payoffs. Yet, in compromising, each player risks complete loss if the opponent decides to seek his or her own maximum payoff. Rather than risk total loss, players tend to prefer the less productive outcome.

Why is Game Theory Useful?

These models can provide insight into the strategic options available to participants in particular situations and the outcomes those options are likely to produce. With this insight, decision makers can better assess the potential effects of their actions and make choices that are more likely to achieve their goals and avoid conflict.

For example, deterrence theory has guided U.S. defense strategy since the end of World War II. It assumes that a credible and significant threat of retaliation can curb an aggressor's behavior; if an individual believes that aggressive behavior may trigger an unacceptable and violent response from others, he or she is less likely to behave aggressively. The threat of reprisal does not directly reduce the probability of violence; instead, the perceived benefit of aggressive behavior decreases, in the face of probable retaliation. If two individuals recognize that their best interests lie in avoiding each other's retaliation, neither is likely to initiate hostilities. This was the guiding principle behind U.S.-Soviet relations during much of the Cold War.

The concept of mutual deterrence paved the way for arms-control measures and further cooperation. By highlighting strategic choices and potential collective outcomes, game theory helped illustrate how a potentially destructive relationship could be framed, managed, and transformed to provide mutual benefits, including avoidance of an uncontrolled arms race and nuclear war.

An Example of Game Theory: The Prisoner's Dilemma

The Prisoner's Dilemma, illustrated in Figure 1, is one of the best-known models in game theory. It illustrates the paradoxical nature of interaction between mutually suspicious participants with opposing interests.

Figure 1. Possible outcomes for the Prisoner's Dilemma. The number in the upper triangle of each pair indicates the payoff for Player B; the number in the lower triangle, the payoff for Player A. Higher numbers represent greater payoffs for the individual, with preferences ranked from 4 (most preferred) to 1 (least preferred).

In this hypothetical situation, two accomplices to a crime are imprisoned, and they forge a pact to not betray one another and not confess to the crime. The severity of the punishment that each receives is determined not only by his or her behavior, but also by the behavior of his or her accomplice. The two prisoners are separated and cannot communicate with each other. Each is told that there are four possible outcomes:

  1. If one confesses to the crime and turns in the accomplice (defecting from a pact with the accomplice), his sentence will be reduced.
  2. If one confesses while the accomplice does not (i.e. the accomplice cooperates with the pact to not betray each other), the first can strike a deal with the police, and will be set free. But the information he provides will be used to incriminate his accomplice, who will receive the maximum sentence.
  3. If both prisoners confess to the crime (i.e. both defect from their pact), then each receives a reduced sentence, but neither is set free.
  4. If neither confesses to the crime (i.e. they cooperate), then each receives the minimum sentence because of the lack of evidence. This option may not be as attractive to either individual as the option of striking a deal with the police and being set free at the expense of one's partner. Since the prisoners cannot communicate with each other, the question of whether to "trust" the other not to confess is the critical aspect of this game.
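The logic of the dilemma can be made concrete in a short Python sketch, an illustrative aside rather than part of the original essay, using the ordinal payoffs from Figure 1 (4 = most preferred, 1 = least preferred).

    # Prisoner's Dilemma payoffs from Figure 1, as (A's payoff, B's payoff).
    PD = {
        ("cooperate", "cooperate"): (3, 3),  # neither confesses: minimum sentence for both
        ("cooperate", "defect"):    (1, 4),  # A stays silent, B confesses and goes free
        ("defect",    "cooperate"): (4, 1),  # A confesses and goes free, B gets the maximum
        ("defect",    "defect"):    (2, 2),  # both confess: reduced, but not minimal, sentences
    }

    # Whatever B does, A's payoff is higher if A defects (4 > 3 and 2 > 1),
    # and the same reasoning applies to B.
    for b_choice in ("cooperate", "defect"):
        a_if_cooperate = PD[("cooperate", b_choice)][0]
        a_if_defect = PD[("defect", b_choice)][0]
        print(f"If B plays {b_choice}: A gets {a_if_cooperate} by cooperating, "
              f"{a_if_defect} by defecting")

Whatever the accomplice does, confessing yields the higher individual payoff, which is why each prisoner is drawn toward the mutually unproductive 2-2 outcome even though mutual silence (3-3) would leave both better off.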

Although this is a simple model, its lessons can be used to examine more complex strategic interactions, such as arms races. If two antagonistic countries engage in an unchecked arms buildup, they increase the potential for mutual loss and destruction. For each country, the value of arming itself is diminished, because arming yields few advantages over an opponent who is arming as well, while imposing heavy costs -- financial burdens, heightened security tensions, greater mutual destructive capability, etc. -- resulting in an unproductive outcome (2 to 2 in Figure 1). Each country has a choice: cooperate to control arms development, with the goal of achieving mutual benefits, or defect from the pact and develop armaments.

The dilemma stems from the realization that if one side arms itself (defects) and the other does not (cooperates), the participant who develops armaments will be considered stronger and will win the game (the 4 to 1 outcome). If both cooperate, the best possible outcome is a tie (3 to 3). This is better than the payoff from mutual defection and an arms race (2 to 2), but it is not as attractive as winning, and so the temptation to out-arm one's opponent is always present. The fear that one's opponent will give in to such temptations often drives both players to arm; not doing so risks total loss, and the benefits of not arming can only be realized if one's opponent overcomes his or her temptation to win. Such trust is often lacking in the international environment.

The U.S.-Soviet relationship was a good example of this dynamic. For a long time, the two countries did not trust each other at all. Each armed itself to the hilt, fearing that the other one was doing so, and not wanting to risk being vulnerable. Yet the cost of the arms race was so high that it eventually bankrupted the Soviet Union. Had the Soviets been willing to trust the U.S. more, and vice versa, much of the arms race could have been prevented, at tremendous financial and security savings for both nations, and indeed, the rest of the world.

The lessons initially drawn from the Prisoner's Dilemma can be discouraging. Although the game is not strictly zero-sum -- both players would gain from mutual cooperation -- each player fears that any loss will be the other's gain. To keep from losing, each player is motivated to pursue a "winning" strategy. The collective result is unproductive, at best, and destructive, at worst.

A More Realistic Model: Extensions of the Prisoner's Dilemma

Few social situations can be modeled accurately by a single interaction. Rather, most situations result from a series of interactions over a long period of time. An extended version of the Prisoner's Dilemma scenario includes repeated interaction, which increases the probability of cooperative behavior.

The logic of this version of the Prisoner's Dilemma suggests that a player's strategy (defect or cooperate) depends on his or her experience in previous interactions, and that this strategy will in turn affect the opponent's future behavior. The result is a relationship of mutual reciprocity: a player is likely to cooperate if his or her opponent previously demonstrated a willingness to cooperate, and is unlikely to cooperate if the opponent previously did not. The knowledge that the game will be played again leads players to consider the consequences of their actions; one's opponent may retaliate, or be unwilling to cooperate in the future, if one's strategy always seeks maximum payoffs at the expense of the other player.

In a computer-simulated experiment, Robert Axelrod demonstrated that the "winning" strategy in a repeated Prisoner's Dilemma is one that he terms "tit-for-tat."[3] This strategy calls for cooperation on the first move, and in each subsequent move, one chooses the behavior demonstrated by one's opponent in the previous round. Still, there is no "right" or best solution to the paradox presented by the Prisoner's Dilemma. One lost round in a two-player game can be devastating for a player, and the temptation to defect always exists.
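As a rough illustration of Axelrod's result, the Python sketch below plays a repeated Prisoner's Dilemma using the same ordinal payoffs as above; the strategy names, the unconditional-defection opponent, and the ten-round horizon are hypothetical choices made only for illustration.

    # Repeated Prisoner's Dilemma with a tit-for-tat player (illustrative sketch).
    PD = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (1, 4),
        ("defect",    "cooperate"): (4, 1),
        ("defect",    "defect"):    (2, 2),
    }

    def tit_for_tat(opponent_history):
        """Cooperate on the first move, then copy the opponent's previous move."""
        return "cooperate" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "defect"

    def play_repeated(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)   # each player sees the other's past moves
            move_b = strategy_b(history_a)
            pay_a, pay_b = PD[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    # Two tit-for-tat players settle into steady cooperation (3 per round each),
    # while tit-for-tat loses only the first round against an unconditional defector.
    print(play_repeated(tit_for_tat, tit_for_tat))     # -> (30, 30)
    print(play_repeated(tit_for_tat, always_defect))   # -> (19, 22)

Under these assumptions, reciprocity pays: tit-for-tat never exploits a cooperative partner, yet it limits its losses against a defector, which is the intuition behind Axelrod's finding.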


[1] Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960), 4.

[2] Anatol Rapoport (ed.), Game Theory as a Theory of Conflict Resolution (Boston: D. Reidel Publishing Company, 1974), 1.

[3] Robert Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984).


Use the following to cite this article:
Smith, M. Shane. "Game Theory." Beyond Intractability. Eds. Guy Burgess and Heidi Burgess. Conflict Information Consortium, University of Colorado, Boulder. Posted: August 2003 <http://www.beyondintractability.org/essay/prisoners-dilemma>.

