Exam Revision Flashcards Preview

EC546 Game Theory > Exam Revision > Flashcards

Flashcards in Exam Revision Deck (63)

Define complete information games.

What are the two types? Define them.

Complete information games are those where the player whose turn it is to move knows at least as much as the last player who moved.

-Perfect information games: Players know the full history of the game, i.e. the moves made and the payoffs.

-Imperfect information games: Players do not observe some of the actions of others.

However, they do know who the other players are, their strategies and the payoffs of the other players.


Define incomplete information games.

Give two topic examples.

Games where players do not know some information about the other players (e.g. their payoffs or type).

1) Bayesian Games
2) Principal-Agent relationships.


How do you find a Bayesian-Nash equilibrium?

It is the generalisation of the Nash equilibrium for incomplete information games.

1) Turn the game from incomplete to imperfect information
2) by introducing an extra player, 'Nature', as a proxy for the state of the world facing one player.
3) Then use the Nash concept to solve the game.


What are the probabilities associated with nature's move?

They are subjective probabilities.

They are subjective for the player who faces uncertainty about the other player's type.


In a Bayes-Nash game, how do you know which strategy the unpredictable player will play?

Because they will often have a dominant strategy in each state of the world.

So you can assign probability p to one state and 1-p to the other.

You then know the predictable player's payoff in each state, because the other player will play their dominant strategy.
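The expected-payoff calculation described above can be sketched in Python. All payoffs, the prior p, and the action labels are illustrative assumptions, not numbers from the notes:

```python
# Hypothetical example: player 2 faces player 1, whose type is A with
# probability p and B with probability 1 - p.  In state A player 1's
# dominant strategy is 'X'; in state B it is 'Y'.
p = 0.6

# Player 2's payoffs against player 1's dominant strategy in each state.
u2 = {
    ("L", "X"): 4, ("L", "Y"): 0,
    ("R", "X"): 1, ("R", "Y"): 3,
}

def expected_payoff(action):
    """Player 2's expected payoff from `action`, averaging over the states."""
    return p * u2[(action, "X")] + (1 - p) * u2[(action, "Y")]

best = max(["L", "R"], key=expected_payoff)
print(best, expected_payoff("L"), expected_payoff("R"))  # L 2.4 1.8
```

The unpredictable player's play is pinned down by their dominant strategy, so only player 2's expected payoff needs computing.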


What is the assumption of common prior?

It is the assumption that the unpredictable player knows the predictable player's estimate of p; in other words, all players share the same prior beliefs about the state of the world.


Explain the assumptions behind the extensive form game in Bayes-Nash

-Dean knows what type he is.
-James' nodes are joined in one information set: he doesn't know which game type is being played.
-Dean knows this about James.


Describe the concept behind auctions.

Describe the private value model and the pure common value model.

There is asymmetric information in auctions.

Each player's bid is a function of their own information.

Players will maximise their payoff given the strategies of others and their beliefs of the other player's information.

Private value model is where each bidder knows how much they value the item for sale.

Pure common value model is where bidders have different private information about the actual value. But the actual value is the same for everyone.


Explain the different types of auctions.

First-price sealed bid: the highest bid wins and the winner pays their own bid.

Second-price sealed bid: the highest bid wins but the winner pays the second-highest bid.

English: the price ascends until only the highest bidder remains.

Dutch: the price falls until someone bids.
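A minimal sketch of the second-price rule, with made-up values and bids, showing why bidding your true value does at least as well as shading:

```python
# Second-price sealed bid: highest bidder wins, pays the second-highest bid.
def second_price_outcome(bids):
    """Return (winner index, price paid) under second-price rules."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

value = 10            # bidder 0's private value (illustrative)
rival_bids = [7, 5]   # rivals' bids (illustrative)

# Truthful bidding: win and pay the second-highest bid.
winner, price = second_price_outcome([value] + rival_bids)
truthful_payoff = value - price if winner == 0 else 0

# Shading below value never lowers the price paid; it only risks losing
# an auction bidder 0 would have won profitably.
winner_s, price_s = second_price_outcome([6] + rival_bids)
shaded_payoff = value - price_s if winner_s == 0 else 0

print(truthful_payoff, shaded_payoff)  # 3 0
```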


Explain moral hazard and the principal-agent problem.

What is optimal for the agent may not be optimal for the principal, and the principal is not on hand to monitor the agent.

The principal needs to design a contract that incentivises T (the agent) to work in a way that benefits P (the principal).

P wants to maximise utility subject to constraints on T's behaviour.


What are the two constraints in the principal-agent problem?

-Participation: T will only participate if he gets at least his reservation utility.

-Incentive Compatibility: given the contract P offers, the effort P wants must be T's own best choice out of all of T's possible actions.
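The two constraints can be checked mechanically. This is a minimal sketch with invented numbers: wages, effort costs and the reservation utility are all hypothetical, and T is taken as risk neutral with the wage conditioned on an outcome that tracks effort (a simplification):

```python
# Illustrative principal-agent check: P offers a wage schedule; T chooses
# effort H or L.  The contract must satisfy both constraints.
wage = {"H": 12, "L": 5}        # wage T receives under each effort's outcome
effort_cost = {"H": 4, "L": 0}  # T's disutility of effort
reservation_utility = 6

def agent_utility(effort):
    """T's utility: wage minus effort cost (risk-neutral T)."""
    return wage[effort] - effort_cost[effort]

# Incentive compatibility: exerting H must beat shirking with L.
ic_holds = agent_utility("H") >= agent_utility("L")
# Participation: the contract must give T at least his reservation utility.
pc_holds = agent_utility("H") >= reservation_utility

print(ic_holds, pc_holds)  # True True
```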


What are the assumptions for a principal agent?



Write down the incentive compatibility constraint

Write down the participation constraint.

What about if the game has asymmetric information?






When is the Principal-Agent game a social optimum outcome?

What actually is the social optimum outcome?

It is where P and T are both risk neutral.

P offers the contract, T accepts and puts forth high (H) effort.


When will a risk-averse agent not accept an offer?

If the contract makes his pay depend on achieving some target revenue.

This is because a contract of this type will expose him to risk.

If P is risk neutral, he must insure risk-averse T against unlucky outcomes.

This leads to a loss of efficiency.

Therefore it is not the social optimum.


What is meant by a rational player?

A player who is aware of their own payoffs, preferences and constraints with respect to their own actions.

They then will aim to get maximal payoff according to their own criteria.


What is strategic thinking?

It is where you take into account what the other player is thinking. You must also take into account that they are considering what you are thinking.


Draw out the first form of the Prisoner's Dilemma. What concept would you try to explain by using this game?

You can show an IEDS solution by using this game. See notes for game and solution.


What is meant by a dominant strategy.
Just remember to use the opposite for a dominated strategy.

A strategy that is strictly or weakly preferred over all other strategies, regardless of the strategy choice of the other players.


Draw the game of the Battle of the Sexes. What is this game usually used for illustrating?

See Notes for game. You can show that there are two Nash Equilibria in the game.

Note that there is no Dominant Strategy equilibria.


What is the name for strategies which survive IEDS? What is important to remember about them?

They are called rationalizable strategies. However, this holds only when the elimination is of strictly dominated strategies.


Give an overview of the IEDS assumptions.

-Both players are rational.
-Player 1 knows that player 2 is rational.
-Player 2 knows that player 1 is rational.
-Player 1 knows that player 2 knows that player 1 is rational.
-And vice versa.
-Player 1 knows that player 2 knows that player 1 knows that player 2 is rational.

For every extra elimination, there is an extra level of assumption.

It is called the layers of rationality

This means:
Each Player's strategy is consistent with their rationality.

They will maximise their payoff given conjectures about the other players' strategies.

If player i conjectures that j will play sj with positive probability, then sj must maximise j's payoff with respect to some conjecture that j holds about the other players' strategies.
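The elimination procedure itself can be sketched in Python. The payoff matrices are the standard Prisoner's-Dilemma numbers, chosen for illustration:

```python
# Iterated elimination of strictly dominated strategies on a bimatrix game.
# A[i][j] and B[i][j] are the row and column players' payoffs.
A = [[3, 0], [5, 1]]   # row player: Cooperate (0) vs Defect (1)
B = [[3, 5], [0, 1]]   # column player

def strictly_dominated(M, own, other):
    """Indices in `own` strictly dominated in payoff matrix M[own][other]."""
    out = set()
    for r in own:
        if any(all(M[r2][c] > M[r][c] for c in other) for r2 in own if r2 != r):
            out.add(r)
    return out

def ieds(A, B):
    rows, cols = set(range(len(A))), set(range(len(A[0])))
    while True:
        Bt = [list(col) for col in zip(*B)]   # transpose: Bt[col][row]
        dr = strictly_dominated(A, rows, cols)
        dc = strictly_dominated(Bt, cols, rows)
        if not dr and not dc:
            return rows, cols
        rows -= dr
        cols -= dc

print(ieds(A, B))  # ({1}, {1}): only (Defect, Defect) survives
```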


What is a Nash equilibrium?

A strategy combination is a Nash equilibrium if each player's strategy choice is a best response against the opponent's choice in that combination.


When is a strategy choice a best response?

A strategy choice is a best response if it yields the highest possible payoff against the opponent's choice.
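The best-response definition gives a direct way to find pure-strategy Nash equilibria: check the condition in every cell. The payoffs below are Battle-of-the-Sexes-style numbers, chosen for illustration:

```python
# Find pure-strategy Nash equilibria by checking mutual best responses.
A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[1, 0], [0, 2]]   # column player's payoffs

def pure_nash(A, B):
    eqs = []
    n, m = len(A), len(A[0])
    for i in range(n):
        for j in range(m):
            row_best = all(A[i][j] >= A[k][j] for k in range(n))
            col_best = all(B[i][j] >= B[i][l] for l in range(m))
            if row_best and col_best:   # both playing best responses
                eqs.append((i, j))
    return eqs

print(pure_nash(A, B))  # [(0, 0), (1, 1)]: two pure Nash equilibria
```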


Will a Nash always exist?

Yes: in a finite game a Nash equilibrium always exists, although sometimes only in mixed strategies.


What is the relationship between Nash Equilibria and Dominant Strategy Equilibria?

What is important about this?

Every DSE is a Nash but not all Nash are DSE.

This means when you are choosing between Nash you can simply discard the weakly dominated strategies.


How can you choose between Nash Equilibria?

-Pre-play negotiation: the players negotiate before they play.

-Convention: if you have played the game before and settled on a certain Nash, you are likely to reach that Nash again.

-Focal Point: if one Nash equilibrium gives a higher payoff than another, the higher payoff may achieve the necessary convergence of expectations.


What is the conjecture of players in the Bertrand-Nash game?

Each firm assumes that the other will act in a way to keep the price that they sell at fixed.


Derive Bertrand solutions under
-homogeneous goods
-differentiated goods

Draw the Bertrand curve

See notes.
Hint: equal prices, half of the market.

See notes


How can you solve the reaction functions to find the Nash solution for Bertrand and Cournot?

What do Cournot firms set, and what does the graph look like?

What do Bertrand firms set, and what does that graph look like?


Cournot firms set quantity; impose symmetry with Qa = Qb.

The reaction curves hit both axes.

Bertrand firms set price; impose symmetry with Pa = Pb.

The reaction curve starts on one axis and then moves outwards.
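The Cournot reaction-function step can be sketched numerically. Linear demand P = a - b(q1 + q2) with constant marginal cost c is assumed, and the parameter values are illustrative:

```python
# Symmetric Cournot duopoly: firm i's reaction function is
# q_i = (a - c - b*q_j) / (2b); symmetry q1 = q2 gives q* = (a - c) / (3b).
a, b, c = 10.0, 1.0, 4.0

def reaction(q_other):
    """Best-response quantity to the rival's quantity."""
    return max((a - c - b * q_other) / (2 * b), 0.0)

# Iterating the reaction functions converges to the Cournot-Nash point.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = reaction(q2), reaction(q1)

q_nash = (a - c) / (3 * b)
print(q1, q2, q_nash)   # all three approximately 2.0
```

Iterating best responses converges here because the reaction functions have slope -1/2; solving directly with symmetry gives the same answer.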


What is the outcome of the Bertrand game?

A and B will keep undercutting each other until they end up setting p=mc.

The perfect competition prices.

This is the welfare optimal solution.


What are the conjectures of Cournot?

Each firm assumes that the other will act in a way that keeps its quantity fixed.


What is your aim when calculating the cartel solution?

You are trying to maximise the joint profit of firm 1 and firm 2. So add the two profits together and then solve with respect to one of the firms. From here you can set A = B (symmetry) and find the cartel price or quantity.
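The cartel calculation can be sketched for the same linear-demand setup (P = a - bQ, marginal cost c; parameter values are illustrative). Joint profit (P - c)Q is maximised at the monopoly output, which the symmetric firms split equally:

```python
# Cartel (joint-profit) solution: Q = (a - c) / (2b), shared equally.
a, b, c = 10.0, 1.0, 4.0

Q_cartel = (a - c) / (2 * b)          # joint (monopoly) output
q_each = Q_cartel / 2                 # set q_A = q_B by symmetry
P_cartel = a - b * Q_cartel
profit_each = (P_cartel - c) * q_each

Q_cournot = 2 * (a - c) / (3 * b)     # total Cournot output, for comparison
print(q_each, P_cartel, profit_each, Q_cartel < Q_cournot)
```

The cartel restricts output below the Cournot total, raising the price and each firm's profit.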


Where are mixed strategies best used?

Where one player wants a coincidence of actions and the other wants to avoid it.

Also when there are two pure-strategy Nash equilibria and the refinement criteria do not select between them.


How do you know where to plot the variables?

Look at where q and p enter the expected payoffs and set each equal to 1. A probability of 1 means that pure strategy is played for certain, so plot that strategy at 1 on the axis.

The other pure strategy corresponds to the probability being 0, so plot it at zero.


Solve an example of a mixed game

see notes
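As a worked sketch (separate from the notes' example), the mixed equilibrium of a matching-pennies game can be found from the indifference condition: each player randomises so that the opponent is indifferent between their pure strategies:

```python
# Matching pennies: Row plays Up with probability p, Column plays Left
# with probability q.  Payoffs are the standard matching-pennies numbers.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs

def mix_making_opponent_indifferent(M):
    """Probability m on the first row of M solving
    m*M[0][0] + (1-m)*M[1][0] == m*M[0][1] + (1-m)*M[1][1]."""
    num = M[1][1] - M[1][0]
    den = M[0][0] - M[1][0] - M[0][1] + M[1][1]
    return num / den

p = mix_making_opponent_indifferent(B)    # Row's probability of Up
At = [list(col) for col in zip(*A)]       # transpose for the column player
q = mix_making_opponent_indifferent(At)   # Column's probability of Left
print(p, q)  # 0.5 0.5
```

Note that Row's probability p is chosen to make Column indifferent (so it is solved from B), and vice versa.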


When can you show a mixed strategy in a game?

Only if the actions appear random to an observer, even if the player is sure what they are playing.


What is a node called in a perfect information, sequential move game?

It is called a singleton.


Can you name all parts of the extensive form game?

See notes, an information set is either one node or a selection of nodes.


What is the outcome of the centipede game?

Rollback means the game stops at the very first decision node. See notes for the diagram.


What is meant by sequential rationality?

A player is sequentially rational if, at the node at which they move, they maximise their utility given that they are at that node.

Even if that particular node is precluded by their own strategy.


What is needed to find the backwards induction or rollback equilibrium?

The 'common knowledge' of sequential rationality.


What is meant by a credible threat?

It is a threat where it is in the player's own interest to carry out the threat when given the option.


Solve a game using backwards induction.

See notes
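As a standalone sketch (the tree structure and payoffs here are invented, not the notes' game), rollback can be written as a short recursion: at each node the mover picks the child whose rolled-back payoff is best for them:

```python
# Backwards induction (rollback) on a small two-stage game tree.
# Internal nodes are (player, children); leaves are payoff pairs (u1, u2).
tree = ("P1", [
    ("P2", [(3, 1), (0, 0)]),   # P1 plays Left, then P2 moves
    ("P2", [(1, 2), (2, 3)]),   # P1 plays Right, then P2 moves
])

def rollback(node):
    """Return the payoff pair reached under sequentially rational play."""
    if isinstance(node[0], str):          # internal node: (player, children)
        player, children = node
        idx = 0 if player == "P1" else 1  # which payoff the mover cares about
        results = [rollback(child) for child in children]
        return max(results, key=lambda payoffs: payoffs[idx])
    return node                           # leaf: payoff pair

print(rollback(tree))  # (3, 1)
```

P2 would pick (3, 1) on the left and (2, 3) on the right; anticipating this, P1 goes Left.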


What is the evidence regarding rollback solutions?

There is no systematic evidence supporting rollback solutions.

Classroom and experimental scenarios have produced the opposite of the theory's prediction.


What is the relationship between backwards induction and IEDS?

In the extensive form, backwards induction gives the same result as IEDS in the associated strategic form of the game.


What is one thing to watch when doing IEDS solutions?

If the strategies that remain all correspond to the same play in the extensive form, then you can say that the game is solvable.


How do you work out how many strategies each player has?

Multiply together the number of available actions at each of the player's decision nodes (so with the same number of actions at every node, it is the number of actions raised to the power of the number of nodes).
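The counting rule in one line (the node counts here are an invented example):

```python
# A strategy specifies an action at EVERY decision node, so the number of
# strategies is the product of the action counts across the nodes.
from math import prod

actions_per_node = [2, 3, 2]         # e.g. three decision nodes
n_strategies = prod(actions_per_node)
print(n_strategies)                  # 2 * 3 * 2 = 12
```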


What do you do for Cournot-Stackelberg?

Solve firm 2 (the follower) first, then substitute into firm 1 (the leader) and solve.

Remember Cournot-Nash has equal quantities, so you can work this out using firm 2's solution and compare.

Make sure to work out prices, profits and quantities.

Remember to mention that the products are slightly differentiated and that quantity is selected sequentially.
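The solve-the-follower-first procedure can be sketched for the homogeneous-goods case with linear demand P = a - b(q1 + q2) and marginal cost c (illustrative parameters; the exam version with differentiated products follows the same steps):

```python
# Cournot-Stackelberg: solve firm 2's reaction function, substitute into
# firm 1's profit, then maximise.
a, b, c = 10.0, 1.0, 4.0

def q2_reaction(q1):
    """Follower's reaction function: q2 = (a - c - b*q1) / (2b)."""
    return (a - c - b * q1) / (2 * b)

# Substituting gives leader profit pi1 = ((a - c - b*q1) / 2) * q1,
# maximised at q1 = (a - c) / (2b).
q1 = (a - c) / (2 * b)
q2 = q2_reaction(q1)
P = a - b * (q1 + q2)
pi1, pi2 = (P - c) * q1, (P - c) * q2

q_cournot = (a - c) / (3 * b)   # symmetric Cournot-Nash, for comparison
print(q1, q2, P, pi1, pi2, q_cournot)  # 3.0 1.5 5.5 4.5 2.25 2.0
```

The leader produces more than in Cournot-Nash and the follower less, so moving first in quantities is an advantage.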


What do you do for Bertrand-Stackelberg?

You solve for firm 2 (the follower) first and then substitute into firm 1. But make sure to get firm 1's solution into the simplest form possible before you multiply out the brackets. This will make it more likely you get the right solution.

Make sure to work out prices, profits and quantities.

Remember to mention that the products are slightly differentiated and that price is selected sequentially.


In what sequential games does player 2 have an advantage?

In the Coke and Pepsi game (1 or 2).

Pepsi has the highest average payoffs, but I don't think this is the reason.

Bertrand-Stackelberg, because player 2 can undercut the price, meaning they make a higher profit.


Define an information set

Information sets are a collection of decision nodes where:
-The player has a move at every node in the set.
-When the play of the game reaches the information set, the player whose turn it is to move doesn't know which node in the set has been reached.
They don't know what strategy the player before them has played.


What is a subgame?

A subgame starts at a single decision node; it cannot start at a (non-singleton) information set.

If an information set lies within the subgame, it must be contained in it entirely.


When is a nash equilibrium subgame-perfect?

A Nash equilibrium will be subgame-perfect if the players' strategies constitute a Nash equilibrium in every subgame.


Can backwards induction be used if a game is not perfect information?

No, it cannot. Instead, work out which choice would be made at the decision nodes in the information set, and then solve backwards from there.


What are the assumptions for Subgame Perfect Equilibria?

(i) Player 1 is rational
(ii) Player 2 is sequentially rational
(iii) At the node at which he moves, player 2 knows (i)
(iv) Player 1 knows both (ii) and (iii)


How do you tell if a nash outcome is likely?

You check whether that choice is likely to be made, assuming the post-entry game will be played.


What happens in finitely repeated games if:

-There is a SINGLE (unique) Nash in the stage game?
-There are numerous Nash in the stage game?

Single Nash: there will be a unique subgame perfect equilibrium; the stage Nash is played over and over again.

Multiple Nash: the subgame perfect equilibrium may involve play that is not one of the stage Nash equilibria.


How can you work out the payoffs of a finitely repeated game?

You just keep adding the payoffs of each stage game (discounting them if a discount factor is given).


What will the payoff be in a finitely repeated bertrand duopoly?

If it were one shot, both firms would set P = MC because each wants to undercut the other.

Because the game is finite, you know when it ends; backwards induction from the last period means every period is played like the one-shot game.

So both firms set P = MC in every period, and collusion cannot be sustained.


What is the name of the strategy that is used by players in infinitely repeated games?

Grim-Trigger Strategy.


If there is a unique nash in an infinitely repeated game, what is the Subgame Perfect Equilibrium?

There will be many. When you are asked for the discount factor, say that the SPNE is sustained for all discount factors at or above the critical value needed to keep the players cooperating.


What is the intuition behind the grim-trigger strategies?

Firms take into account not only the one-period gain in profit from deviating, but also the loss in future profits once the other firm retaliates.
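The grim-trigger trade-off can be written out and solved for the critical discount factor. The stage payoffs below are illustrative Prisoner's-Dilemma numbers, not from the notes:

```python
# Grim trigger: cooperation is sustainable when the cooperative stream
# beats deviating once and then getting the stage Nash forever:
#   pi_c / (1 - d)  >=  pi_d + d * pi_n / (1 - d),
# which rearranges to d >= (pi_d - pi_c) / (pi_d - pi_n).
pi_c = 3.0   # per-period collusive payoff
pi_d = 5.0   # one-period payoff from deviating
pi_n = 1.0   # stage-game Nash payoff (the punishment phase)

d_critical = (pi_d - pi_c) / (pi_d - pi_n)
print(d_critical)   # 0.5: collusion is sustainable for d >= 0.5

# Sanity check at a discount factor above the threshold:
d = 0.6
cooperate_value = pi_c / (1 - d)
deviate_value = pi_d + d * pi_n / (1 - d)
print(cooperate_value >= deviate_value)   # True
```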