### A Problem with Nash Equilibrium

I've been teaching a little game theory in a course that surveys a variety of analytic methods of possible use to lawyers, and it occurred to me that a problem with the idea of Nash Equilibrium in oligopoly, which I have discussed in the past, is actually a more general problem.

The idea of Nash equilibrium is for the player of a multi-player game to observe the strategies all other players are following and then choose the strategy that is best for him given what they are doing. In other words, he freezes their behavior, assuming they will do the same thing whatever he does.

The problem is that my choices affect what choices are available to you. It is not, in general, possible for all other players to keep doing the same thing whatever I do--some of the things I might do would make things they might be doing impossible. Hence in defining Nash equilibrium we must implicitly assume, not that other players don't react, but that they react in some specified way, something we can describe as following the same strategy in the differing conditions corresponding to different choices I might make. There is no theoretical basis for deciding what that specified way is, hence Nash equilibrium is not clearly defined.

Consider an oligopoly. Each firm is producing a quantity and selling it at a price--all at the same price if the goods are perfect substitutes. If one firm changes the quantity it produces and sells, it is no longer possible for all the other firms to keep selling the same quantity as before at the same price as before.

We might define a strategy as a price and assume that when I change my price everyone else keeps the price he is charging the same. The result is Bertrand competition. As long as price is above cost, it pays a firm to charge a penny less than everyone else so as to take over the whole market, or at least as much of it as it can supply--for simplicity assume constant costs. So the equilibrium is price equal to cost.
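The undercutting logic can be sketched numerically. This is a minimal illustration, not anything from the post itself: the cost of 10.00 and starting price of 40.00 are arbitrary, and each firm's "best response" is simply to undercut the rival by a penny whenever doing so is still profitable.

```python
# Bertrand competition sketch: two firms with identical constant
# marginal cost undercut each other by a penny until price = cost.
# All numbers (COST = 10.00, starting price = 40.00) are illustrative.

COST = 10.00
PENNY = 0.01

def best_response(rival_price, cost=COST):
    """Undercut the rival by a penny while that is still profitable;
    otherwise price at cost."""
    if rival_price - PENNY > cost:
        return round(rival_price - PENNY, 2)
    return cost

p1, p2 = 40.00, 40.00
while True:
    new_p1 = best_response(p2)
    new_p2 = best_response(new_p1)
    if (new_p1, new_p2) == (p1, p2):
        break
    p1, p2 = new_p1, new_p2

print(p1, p2)  # both prices converge to marginal cost
```

The fixed point of this undercutting process is the Bertrand result: neither firm can profitably deviate once price equals cost.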

Alternatively, we might define a strategy as a quantity and assume that when I change the quantity I produce everyone else keeps his production constant; price then adjusts to the level at which total quantity demanded equals our summed production. That is Cournot competition; the analysis is more complicated and yields a different result.
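The quantity-setting case can be sketched the same way, to make the contrast with the price-setting case concrete. The parameters below (linear inverse demand P = a - bQ with a = 100, b = 1, and marginal cost c = 10) are illustrative assumptions, not figures from the post:

```python
# Cournot duopoly sketch: linear inverse demand P = a - b*(q1 + q2),
# constant marginal cost c. Parameters are illustrative.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_rival):
    """Profit-maximizing quantity given the rival's frozen quantity:
    maximize (a - b*(q + q_rival) - c) * q  =>  q = (a - c - b*q_rival) / (2b)."""
    return max(0.0, (a - c - b * q_rival) / (2 * b))

# Iterate best responses until the pair is (numerically) a fixed point.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

price = a - b * (q1 + q2)
print(q1, q2, price)
# Analytic answer for these parameters: q_i = (a - c)/(3b) = 30, price = 40,
# well above the Bertrand result of price = cost = 10.
```

Same demand curve, same costs, same firms; only the definition of a strategy has changed, and the equilibrium price moves from 10 to 40.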

This is not a matter of having multiple Nash solutions, which is also a possibility. It's a matter of not knowing what the Nash solution is until you make an essentially arbitrary definition of a strategy.

I wouldn't be surprised if all this is familiar to people who spend more of their time than I do on game theory, but it isn't mentioned in the text I use and, off hand, I don't remember seeing it in other texts.

## 11 Comments:

Introduction to Industrial Organization by Luis Cabral has a section on oligopolies that deals with Nash Equilibrium and whether a Bertrand or Cournot strategy is used.

It draws some distinctions: if capacity is a long-run decision then Cournot is chosen, but if output can easily be changed then Bertrand is used.

This is a fairly simplified synopsis. The book has about a dozen pages dedicated to it.

Mirowski's Machine Dreams has an interesting analysis of NE. It's over the top (like everything in Mirowski), but, setting aside Nash's own idiosyncrasies, there's an important point in there.

Price taker assumptions are useful for the sake of mathematical tractability. The whole point of game theory was to get away from those to allow for analysis of strategic behavior.

But what does the Nash solution entail, if not "price taking" from the other players in the game? NE gained ascendancy as a solution concept fairly late in the game, probably for precisely this reason. It's an abandonment of the whole von Neumann project.

As I understand it, a Nash Equilibrium starts by assuming that all players' strategies are given. It doesn't promote "current quantity and price" to a strategy.

Outcomes are not fixed, but fall on a continuum that varies with jurisdiction, skill of the actors, the judge, and other factors.

The game needs to be defined first (a set of players, a set of actions available to each player, a set of payoffs for each outcome, an information structure). Bertrand and Cournot are different games, so it is no surprise that their Nash Equilibria differ. The similarity is superficial. A small change in the definition of a game often changes the N.E. drastically, such as choosing what tie-breaking strategy is allowed in an auction or voting game (existence and uniqueness may change as well).

Cournot is a strategic model to illustrate market power. Bertrand is a strategic model to illustrate perfect competition.

One difficult notion to defend is the use of mixed strategies in one-shot games where evolutionary or 'facing heterogeneous groups of players' explanations do not apply.

Solution concepts other than Nash Equilibrium are less popular but may be used: iterated dominance, rationalizability.

Vadim writes:

"The game needs to be defined first (a set of players, a set of actions available for each player, set of payoffs for each outcome, information structure)."

Consider the oligopoly case. The action available to a player is producing some quantity and offering it for sale at some price. That doesn't tell us what the "strategy" available to the player is, because implicit in a strategy is a specification of how actions change when something in the environment (in this case the strategy of another player) changes.

Or in other words, "strategy" here isn't a description of the situation the players are in, it's a description of how Nash equilibrium is being defined for that situation. The two oligopolistic equilibria are different solutions to the same situation with different definitions of Nash equilibrium.

Take a look at "On the Status of the Nash Type of Noncooperative Equilibrium in Economic Theory" by Leif Johansen, Scandinavian Journal of Economics, 1982, v. 84, iss. 3, pp. 421-41

"The idea of Nash equilibrium is for the player of a multi-player game to observe the strategies all other players are following and then choose the strategy that is best for him given what they are doing. In other words, he freezes their behavior, assuming they will do the same thing whatever he does."

I am amazed that no previous readers knew enough game theory to have pointed out the huge error here. This is the definition of a maximally exploitative strategy, *not* a Nash Equilibrium. In a Nash Equilibrium, behavior is not frozen. An NE is a pair of strategies for the two opponents such that neither of them can gain by changing his strategy. I think you can often get to an NE via a cascading set of maximal strategies (A comes up with a strategy based on B's, B comes up with the maximal counter-strategy, iterate until neither strategy changes); I forget whether this is guaranteed or not.

Here is a cite:

http://en.wikipedia.org/wiki/Nash_equilibrium
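The "no profitable unilateral deviation" definition can be checked mechanically. A minimal sketch, using the standard Prisoner's Dilemma payoffs purely as an illustration (the matrix and labels are assumptions, not anything from the thread):

```python
# A strategy pair is a Nash equilibrium iff neither player can gain
# by deviating unilaterally. Payoffs are the standard Prisoner's Dilemma.
# payoffs[(row_strategy, col_strategy)] = (row payoff, col payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(s1, s2):
    u1, u2 = payoffs[(s1, s2)]
    # Would the row player gain by switching, holding s2 fixed?
    if any(payoffs[(d, s2)][0] > u1 for d in strategies):
        return False
    # Would the column player gain by switching, holding s1 fixed?
    if any(payoffs[(s1, d)][1] > u2 for d in strategies):
        return False
    return True

nash = [(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)]
print(nash)  # → [('D', 'D')]
```

Note that the check only freezes the *other* player's strategy while testing each deviation; nothing requires behavior to stay frozen once a player actually moves.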

Your point about one person's options limiting another is really a quite separate issue about how the game is formulated. It's a little subtle, let me see if I can do it justice in a short space.

Consider two types of games: simultaneous and alternate move. In a simultaneous move game, there has to be a payoff for the situation where the players choose a pair of strategies which you claim are incompatible. Right? Otherwise the game is ill-defined. So maybe both companies try to produce too many units, but end up having to throw away some of them - there is still a payoff associated with that. Or maybe they can cancel a production run, whatever. You can't define a game as simultaneous move, and then not have a payoff for some pair of moves - that doesn't fit the definition of a game.

If the game is alternate-move, then it's easy. The definition of a strategy is a response to the situation a player is faced with. So the strategy for the 2nd mover would be an allowed response for every possible move of the 1st mover, and thus would not include any impossible responses.

Patri points out that I misstated the definition of Nash equilibrium. I should have added that every player is acting as I described.

But I don't think that affects the point I was making. As to his more substantive point, it's true that we can define the game in a way which gives a solution--for instance assume that "strategy" is "produce quantity Q, sell it for whatever it will get" or "Set price at P, produce what you can sell at that price."

My point is that nothing in the real world situation tells us which of those alternatives (among others) we should choose.

This is certainly not "a problem with Nash equilibrium". If anything, it's a problem with game theory.

Patri Friedman has already pointed out your misconception of Nash equilibrium. More important, and Patri already hinted at this as well, I think, is that Bertrand and Cournot competition are two different games. The crucial difference, as far as I see it, is not that the decision variable is either price or quantity, but rather the behavior of the buyers. Implicitly, in the Bertrand model, buyers actively search for the cheapest good. Cournot competition does not really say much about actual price formation or the market-clearing mechanism (a tâtonnement process, maybe).

I think this isn't a problem with Nash equilibrium, but with economic modeling. Neither Cournot nor Bertrand is remotely close to right: If you wanted a model that played more like reality, it would incorporate multiple decisions over time, stock and flow of the product, product differentiability, distinct retail outlets, and limited consumer information sets. We use simple models because the math is combinatorially explosive. And yes, that's upsetting. But that's a consideration distinct from the question of whether our tools are suitable to the models we apply them to.
